Test Report: KVM_Linux_crio 19364

25094c99c11af6abe50820a6398a27b4b8dd70b0:2024-08-04:35633

Test failures (30/320)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 154.14
45 TestAddons/parallel/MetricsServer 322.54
54 TestAddons/StoppedEnableDisable 154.42
173 TestMultiControlPlane/serial/StopSecondaryNode 142.12
175 TestMultiControlPlane/serial/RestartSecondaryNode 59.04
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 383.16
180 TestMultiControlPlane/serial/StopCluster 141.99
240 TestMultiNode/serial/RestartKeepsNodes 324.27
242 TestMultiNode/serial/StopMultiNode 141.24
249 TestPreload 244.1
257 TestKubernetesUpgrade 444.94
263 TestPause/serial/SecondStartNoReconfiguration 63.84
287 TestStartStop/group/old-k8s-version/serial/FirstStart 288.66
301 TestStartStop/group/embed-certs/serial/Stop 139.13
306 TestStartStop/group/no-preload/serial/Stop 139.02
307 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 95.44
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.14
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
316 TestStartStop/group/old-k8s-version/serial/SecondStart 770.19
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.42
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.35
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.51
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.72
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 502.35
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 381.19
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 370.85
328 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 102.23

TestAddons/parallel/Ingress (154.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-110246 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-110246 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-110246 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [df82dc23-6a96-474c-90c3-83927b83004d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [df82dc23-6a96-474c-90c3-83927b83004d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003947706s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-110246 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.205229695s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
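The ssh step above exited with status 28, which is curl's "operation timed out" code passed through by minikube ssh, so the request to the ingress on 127.0.0.1:80 inside the VM never completed within the test window. A minimal sketch for re-running that probe by hand against this profile, assuming the cluster is still up; the --max-time value and the extra kubectl checks are illustrative additions, not part of the test:

# re-run the probe that timed out (curl exit code 28), with a shorter cap and verbose output
out/minikube-linux-amd64 -p addons-110246 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
# assumed follow-up checks: ingress-nginx controller state and the objects created from testdata/
kubectl --context addons-110246 -n ingress-nginx get pods -o wide
kubectl --context addons-110246 get ingress,svc,pods -n default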
addons_test.go:288: (dbg) Run:  kubectl --context addons-110246 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.9
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 addons disable ingress-dns --alsologtostderr -v=1: (1.282867142s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 addons disable ingress --alsologtostderr -v=1: (7.670283754s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-110246 -n addons-110246
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 logs -n 25: (1.207025037s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-312107                                                                     | download-only-312107 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| delete  | -p download-only-598666                                                                     | download-only-598666 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-308590 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | binary-mirror-308590                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37089                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-308590                                                                     | binary-mirror-308590 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| addons  | enable dashboard -p                                                                         | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-110246 --wait=true                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:53 UTC | 03 Aug 24 22:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:53 UTC | 03 Aug 24 22:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-110246 ssh cat                                                                       | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:53 UTC | 03 Aug 24 22:53 UTC |
	|         | /opt/local-path-provisioner/pvc-35102428-567b-4022-9a55-8047dad0f959_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-110246 ip                                                                            | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | -p addons-110246                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | -p addons-110246                                                                            |                      |         |         |                     |                     |
	| addons  | addons-110246 addons                                                                        | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-110246 ssh curl -s                                                                   | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-110246 addons                                                                        | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:55 UTC | 03 Aug 24 22:55 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-110246 ip                                                                            | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:57 UTC | 03 Aug 24 22:57 UTC |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:57 UTC | 03 Aug 24 22:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:57 UTC | 03 Aug 24 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:49:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:49:52.279620   18056 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:49:52.279827   18056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:52.279835   18056 out.go:304] Setting ErrFile to fd 2...
	I0803 22:49:52.279840   18056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:52.279989   18056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 22:49:52.280546   18056 out.go:298] Setting JSON to false
	I0803 22:49:52.281340   18056 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1936,"bootTime":1722723456,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 22:49:52.281416   18056 start.go:139] virtualization: kvm guest
	I0803 22:49:52.283451   18056 out.go:177] * [addons-110246] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 22:49:52.285015   18056 notify.go:220] Checking for updates...
	I0803 22:49:52.285031   18056 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 22:49:52.286428   18056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:49:52.287781   18056 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 22:49:52.289248   18056 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:49:52.290629   18056 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 22:49:52.292048   18056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 22:49:52.293652   18056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:49:52.325152   18056 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 22:49:52.326571   18056 start.go:297] selected driver: kvm2
	I0803 22:49:52.326587   18056 start.go:901] validating driver "kvm2" against <nil>
	I0803 22:49:52.326609   18056 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 22:49:52.327255   18056 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:49:52.327320   18056 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 22:49:52.342294   18056 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 22:49:52.342338   18056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:49:52.342535   18056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 22:49:52.342590   18056 cni.go:84] Creating CNI manager for ""
	I0803 22:49:52.342602   18056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:49:52.342611   18056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 22:49:52.342672   18056 start.go:340] cluster config:
	{Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:49:52.342775   18056 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:49:52.344508   18056 out.go:177] * Starting "addons-110246" primary control-plane node in "addons-110246" cluster
	I0803 22:49:52.346107   18056 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 22:49:52.346151   18056 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 22:49:52.346158   18056 cache.go:56] Caching tarball of preloaded images
	I0803 22:49:52.346230   18056 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 22:49:52.346240   18056 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 22:49:52.346530   18056 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/config.json ...
	I0803 22:49:52.346548   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/config.json: {Name:mk05bdfa1b646526b5412bf86d27a9b4efa97e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:52.346674   18056 start.go:360] acquireMachinesLock for addons-110246: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 22:49:52.346717   18056 start.go:364] duration metric: took 30.605µs to acquireMachinesLock for "addons-110246"
	I0803 22:49:52.346733   18056 start.go:93] Provisioning new machine with config: &{Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 22:49:52.346781   18056 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 22:49:52.348505   18056 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0803 22:49:52.348626   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:49:52.348668   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:49:52.363490   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0803 22:49:52.363977   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:49:52.364561   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:49:52.364591   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:49:52.364954   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:49:52.365146   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:49:52.365305   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:49:52.365456   18056 start.go:159] libmachine.API.Create for "addons-110246" (driver="kvm2")
	I0803 22:49:52.365484   18056 client.go:168] LocalClient.Create starting
	I0803 22:49:52.365527   18056 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 22:49:52.506151   18056 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 22:49:52.600941   18056 main.go:141] libmachine: Running pre-create checks...
	I0803 22:49:52.600964   18056 main.go:141] libmachine: (addons-110246) Calling .PreCreateCheck
	I0803 22:49:52.601528   18056 main.go:141] libmachine: (addons-110246) Calling .GetConfigRaw
	I0803 22:49:52.601967   18056 main.go:141] libmachine: Creating machine...
	I0803 22:49:52.601981   18056 main.go:141] libmachine: (addons-110246) Calling .Create
	I0803 22:49:52.602204   18056 main.go:141] libmachine: (addons-110246) Creating KVM machine...
	I0803 22:49:52.603240   18056 main.go:141] libmachine: (addons-110246) DBG | found existing default KVM network
	I0803 22:49:52.604092   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:52.603971   18078 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012ed90}
	I0803 22:49:52.604175   18056 main.go:141] libmachine: (addons-110246) DBG | created network xml: 
	I0803 22:49:52.604200   18056 main.go:141] libmachine: (addons-110246) DBG | <network>
	I0803 22:49:52.604211   18056 main.go:141] libmachine: (addons-110246) DBG |   <name>mk-addons-110246</name>
	I0803 22:49:52.604221   18056 main.go:141] libmachine: (addons-110246) DBG |   <dns enable='no'/>
	I0803 22:49:52.604230   18056 main.go:141] libmachine: (addons-110246) DBG |   
	I0803 22:49:52.604243   18056 main.go:141] libmachine: (addons-110246) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0803 22:49:52.604251   18056 main.go:141] libmachine: (addons-110246) DBG |     <dhcp>
	I0803 22:49:52.604260   18056 main.go:141] libmachine: (addons-110246) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0803 22:49:52.604268   18056 main.go:141] libmachine: (addons-110246) DBG |     </dhcp>
	I0803 22:49:52.604273   18056 main.go:141] libmachine: (addons-110246) DBG |   </ip>
	I0803 22:49:52.604280   18056 main.go:141] libmachine: (addons-110246) DBG |   
	I0803 22:49:52.604285   18056 main.go:141] libmachine: (addons-110246) DBG | </network>
	I0803 22:49:52.604302   18056 main.go:141] libmachine: (addons-110246) DBG | 
	I0803 22:49:52.609630   18056 main.go:141] libmachine: (addons-110246) DBG | trying to create private KVM network mk-addons-110246 192.168.39.0/24...
	I0803 22:49:52.673239   18056 main.go:141] libmachine: (addons-110246) DBG | private KVM network mk-addons-110246 192.168.39.0/24 created
	I0803 22:49:52.673272   18056 main.go:141] libmachine: (addons-110246) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246 ...
	I0803 22:49:52.673285   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:52.673212   18078 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:49:52.673332   18056 main.go:141] libmachine: (addons-110246) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 22:49:52.673386   18056 main.go:141] libmachine: (addons-110246) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 22:49:52.931705   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:52.931599   18078 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa...
	I0803 22:49:53.077689   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:53.077587   18078 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/addons-110246.rawdisk...
	I0803 22:49:53.077730   18056 main.go:141] libmachine: (addons-110246) DBG | Writing magic tar header
	I0803 22:49:53.077744   18056 main.go:141] libmachine: (addons-110246) DBG | Writing SSH key tar header
	I0803 22:49:53.077755   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:53.077699   18078 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246 ...
	I0803 22:49:53.077840   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246
	I0803 22:49:53.077882   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 22:49:53.077895   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:49:53.077905   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 22:49:53.077912   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 22:49:53.077934   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins
	I0803 22:49:53.077949   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246 (perms=drwx------)
	I0803 22:49:53.077961   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home
	I0803 22:49:53.077975   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 22:49:53.077993   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 22:49:53.078004   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 22:49:53.078010   18056 main.go:141] libmachine: (addons-110246) DBG | Skipping /home - not owner
	I0803 22:49:53.078024   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 22:49:53.078049   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 22:49:53.078071   18056 main.go:141] libmachine: (addons-110246) Creating domain...
	I0803 22:49:53.079116   18056 main.go:141] libmachine: (addons-110246) define libvirt domain using xml: 
	I0803 22:49:53.079133   18056 main.go:141] libmachine: (addons-110246) <domain type='kvm'>
	I0803 22:49:53.079141   18056 main.go:141] libmachine: (addons-110246)   <name>addons-110246</name>
	I0803 22:49:53.079146   18056 main.go:141] libmachine: (addons-110246)   <memory unit='MiB'>4000</memory>
	I0803 22:49:53.079152   18056 main.go:141] libmachine: (addons-110246)   <vcpu>2</vcpu>
	I0803 22:49:53.079160   18056 main.go:141] libmachine: (addons-110246)   <features>
	I0803 22:49:53.079168   18056 main.go:141] libmachine: (addons-110246)     <acpi/>
	I0803 22:49:53.079179   18056 main.go:141] libmachine: (addons-110246)     <apic/>
	I0803 22:49:53.079206   18056 main.go:141] libmachine: (addons-110246)     <pae/>
	I0803 22:49:53.079230   18056 main.go:141] libmachine: (addons-110246)     
	I0803 22:49:53.079245   18056 main.go:141] libmachine: (addons-110246)   </features>
	I0803 22:49:53.079258   18056 main.go:141] libmachine: (addons-110246)   <cpu mode='host-passthrough'>
	I0803 22:49:53.079272   18056 main.go:141] libmachine: (addons-110246)   
	I0803 22:49:53.079297   18056 main.go:141] libmachine: (addons-110246)   </cpu>
	I0803 22:49:53.079310   18056 main.go:141] libmachine: (addons-110246)   <os>
	I0803 22:49:53.079325   18056 main.go:141] libmachine: (addons-110246)     <type>hvm</type>
	I0803 22:49:53.079337   18056 main.go:141] libmachine: (addons-110246)     <boot dev='cdrom'/>
	I0803 22:49:53.079355   18056 main.go:141] libmachine: (addons-110246)     <boot dev='hd'/>
	I0803 22:49:53.079375   18056 main.go:141] libmachine: (addons-110246)     <bootmenu enable='no'/>
	I0803 22:49:53.079389   18056 main.go:141] libmachine: (addons-110246)   </os>
	I0803 22:49:53.079400   18056 main.go:141] libmachine: (addons-110246)   <devices>
	I0803 22:49:53.079413   18056 main.go:141] libmachine: (addons-110246)     <disk type='file' device='cdrom'>
	I0803 22:49:53.079424   18056 main.go:141] libmachine: (addons-110246)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/boot2docker.iso'/>
	I0803 22:49:53.079436   18056 main.go:141] libmachine: (addons-110246)       <target dev='hdc' bus='scsi'/>
	I0803 22:49:53.079454   18056 main.go:141] libmachine: (addons-110246)       <readonly/>
	I0803 22:49:53.079467   18056 main.go:141] libmachine: (addons-110246)     </disk>
	I0803 22:49:53.079482   18056 main.go:141] libmachine: (addons-110246)     <disk type='file' device='disk'>
	I0803 22:49:53.079496   18056 main.go:141] libmachine: (addons-110246)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 22:49:53.079510   18056 main.go:141] libmachine: (addons-110246)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/addons-110246.rawdisk'/>
	I0803 22:49:53.079519   18056 main.go:141] libmachine: (addons-110246)       <target dev='hda' bus='virtio'/>
	I0803 22:49:53.079537   18056 main.go:141] libmachine: (addons-110246)     </disk>
	I0803 22:49:53.079549   18056 main.go:141] libmachine: (addons-110246)     <interface type='network'>
	I0803 22:49:53.079577   18056 main.go:141] libmachine: (addons-110246)       <source network='mk-addons-110246'/>
	I0803 22:49:53.079590   18056 main.go:141] libmachine: (addons-110246)       <model type='virtio'/>
	I0803 22:49:53.079605   18056 main.go:141] libmachine: (addons-110246)     </interface>
	I0803 22:49:53.079625   18056 main.go:141] libmachine: (addons-110246)     <interface type='network'>
	I0803 22:49:53.079640   18056 main.go:141] libmachine: (addons-110246)       <source network='default'/>
	I0803 22:49:53.079652   18056 main.go:141] libmachine: (addons-110246)       <model type='virtio'/>
	I0803 22:49:53.079667   18056 main.go:141] libmachine: (addons-110246)     </interface>
	I0803 22:49:53.079679   18056 main.go:141] libmachine: (addons-110246)     <serial type='pty'>
	I0803 22:49:53.079693   18056 main.go:141] libmachine: (addons-110246)       <target port='0'/>
	I0803 22:49:53.079751   18056 main.go:141] libmachine: (addons-110246)     </serial>
	I0803 22:49:53.079773   18056 main.go:141] libmachine: (addons-110246)     <console type='pty'>
	I0803 22:49:53.079782   18056 main.go:141] libmachine: (addons-110246)       <target type='serial' port='0'/>
	I0803 22:49:53.079790   18056 main.go:141] libmachine: (addons-110246)     </console>
	I0803 22:49:53.079801   18056 main.go:141] libmachine: (addons-110246)     <rng model='virtio'>
	I0803 22:49:53.079813   18056 main.go:141] libmachine: (addons-110246)       <backend model='random'>/dev/random</backend>
	I0803 22:49:53.079823   18056 main.go:141] libmachine: (addons-110246)     </rng>
	I0803 22:49:53.079834   18056 main.go:141] libmachine: (addons-110246)     
	I0803 22:49:53.079847   18056 main.go:141] libmachine: (addons-110246)     
	I0803 22:49:53.079859   18056 main.go:141] libmachine: (addons-110246)   </devices>
	I0803 22:49:53.079869   18056 main.go:141] libmachine: (addons-110246) </domain>
	I0803 22:49:53.079882   18056 main.go:141] libmachine: (addons-110246) 
	I0803 22:49:53.087411   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:c9:f9:f7 in network default
	I0803 22:49:53.087929   18056 main.go:141] libmachine: (addons-110246) Ensuring networks are active...
	I0803 22:49:53.087946   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:53.088497   18056 main.go:141] libmachine: (addons-110246) Ensuring network default is active
	I0803 22:49:53.088746   18056 main.go:141] libmachine: (addons-110246) Ensuring network mk-addons-110246 is active
	I0803 22:49:53.089175   18056 main.go:141] libmachine: (addons-110246) Getting domain xml...
	I0803 22:49:53.089760   18056 main.go:141] libmachine: (addons-110246) Creating domain...
	I0803 22:49:54.490903   18056 main.go:141] libmachine: (addons-110246) Waiting to get IP...
	I0803 22:49:54.491533   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:54.491900   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:54.491937   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:54.491889   18078 retry.go:31] will retry after 267.27459ms: waiting for machine to come up
	I0803 22:49:54.760252   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:54.760642   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:54.760669   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:54.760623   18078 retry.go:31] will retry after 261.053928ms: waiting for machine to come up
	I0803 22:49:55.023001   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:55.023448   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:55.023481   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:55.023387   18078 retry.go:31] will retry after 412.486886ms: waiting for machine to come up
	I0803 22:49:55.437979   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:55.438333   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:55.438360   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:55.438292   18078 retry.go:31] will retry after 434.715844ms: waiting for machine to come up
	I0803 22:49:55.874814   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:55.875239   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:55.875265   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:55.875198   18078 retry.go:31] will retry after 695.404352ms: waiting for machine to come up
	I0803 22:49:56.571963   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:56.572400   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:56.572464   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:56.572383   18078 retry.go:31] will retry after 754.799097ms: waiting for machine to come up
	I0803 22:49:57.328265   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:57.328630   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:57.328651   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:57.328585   18078 retry.go:31] will retry after 1.183910018s: waiting for machine to come up
	I0803 22:49:58.514144   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:58.514575   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:58.514602   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:58.514526   18078 retry.go:31] will retry after 896.961741ms: waiting for machine to come up
	I0803 22:49:59.412464   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:59.412877   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:59.412907   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:59.412834   18078 retry.go:31] will retry after 1.510555878s: waiting for machine to come up
	I0803 22:50:00.924491   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:00.924867   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:00.924894   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:00.924817   18078 retry.go:31] will retry after 1.431660453s: waiting for machine to come up
	I0803 22:50:02.358655   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:02.359160   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:02.359228   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:02.359122   18078 retry.go:31] will retry after 2.531171158s: waiting for machine to come up
	I0803 22:50:04.893392   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:04.893870   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:04.893891   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:04.893781   18078 retry.go:31] will retry after 2.446062618s: waiting for machine to come up
	I0803 22:50:07.343233   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:07.343603   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:07.343625   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:07.343552   18078 retry.go:31] will retry after 3.161483574s: waiting for machine to come up
	I0803 22:50:10.509040   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:10.509421   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:10.509449   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:10.509381   18078 retry.go:31] will retry after 4.924124516s: waiting for machine to come up
	I0803 22:50:15.437464   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.437965   18056 main.go:141] libmachine: (addons-110246) Found IP for machine: 192.168.39.9
	I0803 22:50:15.437992   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has current primary IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.437997   18056 main.go:141] libmachine: (addons-110246) Reserving static IP address...
	I0803 22:50:15.438416   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find host DHCP lease matching {name: "addons-110246", mac: "52:54:00:da:10:f7", ip: "192.168.39.9"} in network mk-addons-110246
	I0803 22:50:15.510127   18056 main.go:141] libmachine: (addons-110246) DBG | Getting to WaitForSSH function...
	I0803 22:50:15.510168   18056 main.go:141] libmachine: (addons-110246) Reserved static IP address: 192.168.39.9
	I0803 22:50:15.510210   18056 main.go:141] libmachine: (addons-110246) Waiting for SSH to be available...
	I0803 22:50:15.513059   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.513478   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.513504   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.513613   18056 main.go:141] libmachine: (addons-110246) DBG | Using SSH client type: external
	I0803 22:50:15.513644   18056 main.go:141] libmachine: (addons-110246) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa (-rw-------)
	I0803 22:50:15.513686   18056 main.go:141] libmachine: (addons-110246) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 22:50:15.513701   18056 main.go:141] libmachine: (addons-110246) DBG | About to run SSH command:
	I0803 22:50:15.513712   18056 main.go:141] libmachine: (addons-110246) DBG | exit 0
	I0803 22:50:15.645499   18056 main.go:141] libmachine: (addons-110246) DBG | SSH cmd err, output: <nil>: 
	I0803 22:50:15.645758   18056 main.go:141] libmachine: (addons-110246) KVM machine creation complete!
	I0803 22:50:15.646256   18056 main.go:141] libmachine: (addons-110246) Calling .GetConfigRaw
	I0803 22:50:15.646784   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:15.646971   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:15.647126   18056 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 22:50:15.647142   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:15.648534   18056 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 22:50:15.648560   18056 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 22:50:15.648566   18056 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 22:50:15.648572   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.650728   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.651042   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.651063   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.651238   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.651405   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.651530   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.651659   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:15.651969   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:15.652208   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:15.652222   18056 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 22:50:15.756654   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
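
The repeated "Waiting for SSH" messages above boil down to probing the guest until sshd answers and a trivial `exit 0` succeeds. A minimal sketch of such a readiness loop in Go (a plain TCP probe against port 22, not minikube's actual WaitForSSH helper):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries a TCP dial against the guest's SSH port until it
// answers or the overall deadline expires. The real flow goes further and
// runs "exit 0" over SSH once the port is reachable.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second) // back off between probes
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	// 192.168.39.9:22 is the address the DHCP lease above reports for addons-110246.
	if err := waitForSSH("192.168.39.9:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is reachable")
}
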
	I0803 22:50:15.756675   18056 main.go:141] libmachine: Detecting the provisioner...
	I0803 22:50:15.756686   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.759594   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.759951   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.759974   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.760112   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.760294   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.760429   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.760536   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:15.760699   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:15.760861   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:15.760871   18056 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 22:50:15.870407   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 22:50:15.870495   18056 main.go:141] libmachine: found compatible host: buildroot
	I0803 22:50:15.870509   18056 main.go:141] libmachine: Provisioning with buildroot...
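
Provisioner detection is driven by the `cat /etc/os-release` output shown above: the key=value pairs are parsed and the ID field ("buildroot" here) selects the matching provisioner. A small sketch of that parsing, assuming the file contents have already been fetched as a string:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value lines of /etc/os-release into a map,
// stripping optional surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

func main() {
	// Output captured in the log above.
	osRelease := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
`
	info := parseOSRelease(osRelease)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}
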
	I0803 22:50:15.870522   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:50:15.870754   18056 buildroot.go:166] provisioning hostname "addons-110246"
	I0803 22:50:15.870776   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:50:15.870961   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.873637   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.873973   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.874123   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.874349   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.874578   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.874720   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.874856   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:15.874986   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:15.875152   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:15.875165   18056 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-110246 && echo "addons-110246" | sudo tee /etc/hostname
	I0803 22:50:15.995884   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110246
	
	I0803 22:50:15.995911   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.998899   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.999354   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.999382   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.999593   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.999771   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.999933   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.000029   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.000153   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:16.000377   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:16.000401   18056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-110246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-110246/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-110246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 22:50:16.115270   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 22:50:16.115303   18056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 22:50:16.115353   18056 buildroot.go:174] setting up certificates
	I0803 22:50:16.115365   18056 provision.go:84] configureAuth start
	I0803 22:50:16.115377   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:50:16.115713   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:16.118425   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.118768   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.118797   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.118930   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.120988   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.121215   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.121247   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.121422   18056 provision.go:143] copyHostCerts
	I0803 22:50:16.121507   18056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 22:50:16.121656   18056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 22:50:16.121755   18056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 22:50:16.121840   18056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.addons-110246 san=[127.0.0.1 192.168.39.9 addons-110246 localhost minikube]
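
The server certificate generated at this point carries the SANs listed in the log (127.0.0.1, 192.168.39.9, addons-110246, localhost, minikube). The sketch below signs a comparable certificate with Go's crypto/x509, using a throwaway in-memory CA instead of the ca.pem/ca-key.pem files referenced above; error checks are elided to keep it short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; the real run reuses ca.pem / ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs reported in the log for addons-110246.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-110246"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-110246", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.9")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Print the signed server certificate as PEM (server.pem in the log).
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
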
	I0803 22:50:16.298973   18056 provision.go:177] copyRemoteCerts
	I0803 22:50:16.299035   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 22:50:16.299073   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.301748   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.302096   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.302124   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.302331   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.302517   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.302655   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.302773   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:16.387745   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 22:50:16.413946   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 22:50:16.438400   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 22:50:16.463234   18056 provision.go:87] duration metric: took 347.854825ms to configureAuth
	I0803 22:50:16.463261   18056 buildroot.go:189] setting minikube options for container-runtime
	I0803 22:50:16.463456   18056 config.go:182] Loaded profile config "addons-110246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 22:50:16.463540   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.466369   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.466656   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.466684   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.466885   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.467063   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.467211   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.467343   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.467473   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:16.467647   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:16.467666   18056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 22:50:16.744434   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 22:50:16.744459   18056 main.go:141] libmachine: Checking connection to Docker...
	I0803 22:50:16.744467   18056 main.go:141] libmachine: (addons-110246) Calling .GetURL
	I0803 22:50:16.745832   18056 main.go:141] libmachine: (addons-110246) DBG | Using libvirt version 6000000
	I0803 22:50:16.747946   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.748275   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.748298   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.748435   18056 main.go:141] libmachine: Docker is up and running!
	I0803 22:50:16.748454   18056 main.go:141] libmachine: Reticulating splines...
	I0803 22:50:16.748461   18056 client.go:171] duration metric: took 24.38296815s to LocalClient.Create
	I0803 22:50:16.748487   18056 start.go:167] duration metric: took 24.383031419s to libmachine.API.Create "addons-110246"
	I0803 22:50:16.748501   18056 start.go:293] postStartSetup for "addons-110246" (driver="kvm2")
	I0803 22:50:16.748517   18056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 22:50:16.748540   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.748778   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 22:50:16.748801   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.750881   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.751233   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.751253   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.751386   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.751577   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.751714   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.751843   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:16.835815   18056 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 22:50:16.840065   18056 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 22:50:16.840106   18056 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 22:50:16.840191   18056 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 22:50:16.840219   18056 start.go:296] duration metric: took 91.709209ms for postStartSetup
	I0803 22:50:16.840251   18056 main.go:141] libmachine: (addons-110246) Calling .GetConfigRaw
	I0803 22:50:16.840752   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:16.843193   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.843564   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.843585   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.843807   18056 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/config.json ...
	I0803 22:50:16.844001   18056 start.go:128] duration metric: took 24.4972092s to createHost
	I0803 22:50:16.844035   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.846376   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.846681   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.846702   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.846833   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.847003   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.847132   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.847233   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.847342   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:16.847487   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:16.847496   18056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 22:50:16.954226   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722725416.929233016
	
	I0803 22:50:16.954248   18056 fix.go:216] guest clock: 1722725416.929233016
	I0803 22:50:16.954257   18056 fix.go:229] Guest: 2024-08-03 22:50:16.929233016 +0000 UTC Remote: 2024-08-03 22:50:16.844021543 +0000 UTC m=+24.597637724 (delta=85.211473ms)
	I0803 22:50:16.954304   18056 fix.go:200] guest clock delta is within tolerance: 85.211473ms
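
The guest-clock lines compare the output of `date +%s.%N` on the guest against the host's wall clock and only act when the delta exceeds some tolerance; here the ~85ms difference is accepted. A small sketch of that comparison (the 2s tolerance is a placeholder, the log does not state the actual threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest output from the log: seconds.nanoseconds since the epoch.
	guestOut := "1722725416.929233016"

	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	// Hypothetical tolerance; the log only says the ~85ms delta was acceptable.
	const tolerance = 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
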
	I0803 22:50:16.954315   18056 start.go:83] releasing machines lock for "addons-110246", held for 24.607588326s
	I0803 22:50:16.954345   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.954614   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:16.957009   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.957390   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.957419   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.957548   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.958099   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.958264   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.958375   18056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 22:50:16.958417   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.958455   18056 ssh_runner.go:195] Run: cat /version.json
	I0803 22:50:16.958476   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.960999   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961093   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961410   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.961438   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.961460   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961538   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961630   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.961918   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.961927   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.962119   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.962134   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.962274   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:16.962287   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.962440   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:17.059869   18056 ssh_runner.go:195] Run: systemctl --version
	I0803 22:50:17.065951   18056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 22:50:17.227303   18056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 22:50:17.233292   18056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 22:50:17.233366   18056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 22:50:17.249908   18056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 22:50:17.249928   18056 start.go:495] detecting cgroup driver to use...
	I0803 22:50:17.249999   18056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 22:50:17.269060   18056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 22:50:17.284013   18056 docker.go:217] disabling cri-docker service (if available) ...
	I0803 22:50:17.284062   18056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 22:50:17.298506   18056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 22:50:17.312700   18056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 22:50:17.432566   18056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 22:50:17.570849   18056 docker.go:233] disabling docker service ...
	I0803 22:50:17.570917   18056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 22:50:17.594541   18056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 22:50:17.607432   18056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 22:50:17.747767   18056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 22:50:17.879504   18056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 22:50:17.893501   18056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 22:50:17.912529   18056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 22:50:17.912593   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.924139   18056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 22:50:17.924214   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.935611   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.947040   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.958472   18056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 22:50:17.970049   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.980667   18056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.998432   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:18.009183   18056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 22:50:18.019004   18056 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 22:50:18.019069   18056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 22:50:18.032231   18056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
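
The failed sysctl probe above is expected on a freshly booted guest: the br_netfilter module is not loaded yet, so the fallback is to modprobe it and then enable IPv4 forwarding. A sketch of that probe-then-load pattern, shelling out the same commands (must run as root on Linux):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output and error.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Probe the bridge netfilter sysctl; on a fresh guest this fails with
	// "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables".
	if out, err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Printf("probe failed (%v), loading br_netfilter: %s", err, out)
		if out, err := run("modprobe", "br_netfilter"); err != nil {
			fmt.Printf("modprobe failed: %v %s\n", err, out)
			return
		}
	}
	// Enable IPv4 forwarding, mirroring the `echo 1 > /proc/sys/net/ipv4/ip_forward` step.
	if out, err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Printf("enabling ip_forward failed: %v %s\n", err, out)
	}
}
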
	I0803 22:50:18.042602   18056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:50:18.170949   18056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 22:50:18.308146   18056 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 22:50:18.308239   18056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 22:50:18.313596   18056 start.go:563] Will wait 60s for crictl version
	I0803 22:50:18.313661   18056 ssh_runner.go:195] Run: which crictl
	I0803 22:50:18.317429   18056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 22:50:18.359076   18056 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 22:50:18.359185   18056 ssh_runner.go:195] Run: crio --version
	I0803 22:50:18.387177   18056 ssh_runner.go:195] Run: crio --version
	I0803 22:50:18.416931   18056 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 22:50:18.418436   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:18.420808   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:18.421185   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:18.421213   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:18.421473   18056 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 22:50:18.425657   18056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 22:50:18.438848   18056 kubeadm.go:883] updating cluster {Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 22:50:18.438985   18056 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 22:50:18.439046   18056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 22:50:18.475471   18056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0803 22:50:18.475532   18056 ssh_runner.go:195] Run: which lz4
	I0803 22:50:18.479596   18056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0803 22:50:18.483843   18056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 22:50:18.483880   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0803 22:50:19.837485   18056 crio.go:462] duration metric: took 1.357914095s to copy over tarball
	I0803 22:50:19.837565   18056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 22:50:22.120178   18056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282588751s)
	I0803 22:50:22.120205   18056 crio.go:469] duration metric: took 2.28268959s to extract the tarball
	I0803 22:50:22.120215   18056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 22:50:22.159893   18056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 22:50:22.201653   18056 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 22:50:22.201673   18056 cache_images.go:84] Images are preloaded, skipping loading
	I0803 22:50:22.201680   18056 kubeadm.go:934] updating node { 192.168.39.9 8443 v1.30.3 crio true true} ...
	I0803 22:50:22.201773   18056 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-110246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 22:50:22.201836   18056 ssh_runner.go:195] Run: crio config
	I0803 22:50:22.246998   18056 cni.go:84] Creating CNI manager for ""
	I0803 22:50:22.247016   18056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:50:22.247025   18056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 22:50:22.247046   18056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-110246 NodeName:addons-110246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 22:50:22.247175   18056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-110246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 22:50:22.247233   18056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 22:50:22.257291   18056 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 22:50:22.257392   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 22:50:22.267013   18056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0803 22:50:22.283284   18056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 22:50:22.299732   18056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0803 22:50:22.316271   18056 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I0803 22:50:22.320106   18056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 22:50:22.332424   18056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:50:22.452897   18056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 22:50:22.468995   18056 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246 for IP: 192.168.39.9
	I0803 22:50:22.469015   18056 certs.go:194] generating shared ca certs ...
	I0803 22:50:22.469037   18056 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.469175   18056 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 22:50:22.610747   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt ...
	I0803 22:50:22.610771   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt: {Name:mk3a8f2bd1a415d1c4e7cc2b5924aceda4b639bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.610940   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key ...
	I0803 22:50:22.610950   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key: {Name:mk942ac2ea6bb3e011a5fa7ccb5abff5050c5a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.611019   18056 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 22:50:22.692067   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt ...
	I0803 22:50:22.692093   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt: {Name:mk114239716c33003f0616228c77292e17d394d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.692241   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key ...
	I0803 22:50:22.692251   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key: {Name:mkd0f732a980ba94cb7bfc1d30ec645ce1f371fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.692313   18056 certs.go:256] generating profile certs ...
	I0803 22:50:22.692360   18056 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.key
	I0803 22:50:22.692373   18056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt with IP's: []
	I0803 22:50:22.837428   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt ...
	I0803 22:50:22.837454   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: {Name:mk0b1b89c09a545a9f4c16647029f90822cacb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.837597   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.key ...
	I0803 22:50:22.837607   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.key: {Name:mk607996420dfafa0c43156c772bab34637203ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.837673   18056 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104
	I0803 22:50:22.837689   18056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.9]
	I0803 22:50:22.980827   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104 ...
	I0803 22:50:22.980861   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104: {Name:mk659af5911ae73a1adfafa14713ccf0169f6bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.981064   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104 ...
	I0803 22:50:22.981084   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104: {Name:mkb18322c74bac3d280e0bf809afe98698fd7659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.981183   18056 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt
	I0803 22:50:22.981294   18056 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key
	I0803 22:50:22.981388   18056 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key
	I0803 22:50:22.981410   18056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt with IP's: []
	I0803 22:50:23.095360   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt ...
	I0803 22:50:23.095389   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt: {Name:mk2e07e19d0d6c4415d3afa9e4978acd9676a5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:23.095565   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key ...
	I0803 22:50:23.095579   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key: {Name:mk6477cb65d8325d615c9080c80123d84b8d2dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:23.095765   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 22:50:23.095815   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 22:50:23.095848   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 22:50:23.095882   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 22:50:23.096465   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 22:50:23.121735   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 22:50:23.147400   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 22:50:23.206458   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 22:50:23.230738   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0803 22:50:23.260375   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 22:50:23.288146   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 22:50:23.315076   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0803 22:50:23.339413   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 22:50:23.364261   18056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 22:50:23.381470   18056 ssh_runner.go:195] Run: openssl version
	I0803 22:50:23.387256   18056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 22:50:23.398109   18056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:50:23.402996   18056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:50:23.403061   18056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:50:23.408966   18056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
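
Trusting minikubeCA system-wide works by asking openssl for the certificate's subject hash (b5213941 in this run) and linking /etc/ssl/certs/<hash>.0 to the certificate, which is what the two commands above do. A sketch of those steps shelled out from Go (run as root; paths as in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Hash the CA certificate the way the system trust store expects.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in",
		"/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		fmt.Println("hashing certificate failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in this run

	// Link /etc/ssl/certs/<hash>.0 to the certificate so TLS clients trust it.
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := exec.Command("sudo", "ln", "-fs", "/etc/ssl/certs/minikubeCA.pem", link).Run(); err != nil {
		fmt.Println("creating symlink failed:", err)
		return
	}
	fmt.Println("linked", link)
}
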
	I0803 22:50:23.420022   18056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 22:50:23.424337   18056 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 22:50:23.424397   18056 kubeadm.go:392] StartCluster: {Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:50:23.424497   18056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 22:50:23.424552   18056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 22:50:23.462843   18056 cri.go:89] found id: ""
	I0803 22:50:23.462914   18056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 22:50:23.473433   18056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 22:50:23.483501   18056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 22:50:23.493273   18056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 22:50:23.493293   18056 kubeadm.go:157] found existing configuration files:
	
	I0803 22:50:23.493331   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 22:50:23.502606   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 22:50:23.502655   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 22:50:23.512139   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 22:50:23.521443   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 22:50:23.521497   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 22:50:23.530993   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 22:50:23.540069   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 22:50:23.540123   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 22:50:23.549428   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 22:50:23.558407   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 22:50:23.558475   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 22:50:23.567748   18056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 22:50:23.751245   18056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 22:50:33.775833   18056 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 22:50:33.775916   18056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 22:50:33.775998   18056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 22:50:33.776106   18056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 22:50:33.776224   18056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 22:50:33.776317   18056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 22:50:33.777953   18056 out.go:204]   - Generating certificates and keys ...
	I0803 22:50:33.778052   18056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 22:50:33.778138   18056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 22:50:33.778240   18056 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 22:50:33.778338   18056 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 22:50:33.778422   18056 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 22:50:33.778488   18056 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 22:50:33.778567   18056 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 22:50:33.778734   18056 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-110246 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I0803 22:50:33.778820   18056 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 22:50:33.778988   18056 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-110246 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I0803 22:50:33.779088   18056 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 22:50:33.779187   18056 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 22:50:33.779258   18056 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 22:50:33.779343   18056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 22:50:33.779416   18056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 22:50:33.779505   18056 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 22:50:33.779586   18056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 22:50:33.779673   18056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 22:50:33.779750   18056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 22:50:33.779884   18056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 22:50:33.780011   18056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 22:50:33.781399   18056 out.go:204]   - Booting up control plane ...
	I0803 22:50:33.781510   18056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 22:50:33.781581   18056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 22:50:33.781640   18056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 22:50:33.781738   18056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 22:50:33.781828   18056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 22:50:33.781871   18056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 22:50:33.781997   18056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 22:50:33.782102   18056 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 22:50:33.782183   18056 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.038703ms
	I0803 22:50:33.782275   18056 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 22:50:33.782363   18056 kubeadm.go:310] [api-check] The API server is healthy after 5.001822213s
	I0803 22:50:33.782481   18056 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 22:50:33.782646   18056 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 22:50:33.782729   18056 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 22:50:33.782954   18056 kubeadm.go:310] [mark-control-plane] Marking the node addons-110246 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 22:50:33.783031   18056 kubeadm.go:310] [bootstrap-token] Using token: 5bn30m.9lnl4t0eu1hcsdun
	I0803 22:50:33.784539   18056 out.go:204]   - Configuring RBAC rules ...
	I0803 22:50:33.784643   18056 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 22:50:33.784772   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 22:50:33.784920   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 22:50:33.785053   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 22:50:33.785228   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 22:50:33.785344   18056 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 22:50:33.785512   18056 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 22:50:33.785569   18056 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 22:50:33.785647   18056 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 22:50:33.785655   18056 kubeadm.go:310] 
	I0803 22:50:33.785729   18056 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 22:50:33.785740   18056 kubeadm.go:310] 
	I0803 22:50:33.785844   18056 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 22:50:33.785854   18056 kubeadm.go:310] 
	I0803 22:50:33.785904   18056 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 22:50:33.785985   18056 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 22:50:33.786056   18056 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 22:50:33.786066   18056 kubeadm.go:310] 
	I0803 22:50:33.786138   18056 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 22:50:33.786150   18056 kubeadm.go:310] 
	I0803 22:50:33.786189   18056 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 22:50:33.786199   18056 kubeadm.go:310] 
	I0803 22:50:33.786246   18056 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 22:50:33.786333   18056 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 22:50:33.786428   18056 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 22:50:33.786436   18056 kubeadm.go:310] 
	I0803 22:50:33.786545   18056 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 22:50:33.786621   18056 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 22:50:33.786628   18056 kubeadm.go:310] 
	I0803 22:50:33.786722   18056 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5bn30m.9lnl4t0eu1hcsdun \
	I0803 22:50:33.786873   18056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0803 22:50:33.786917   18056 kubeadm.go:310] 	--control-plane 
	I0803 22:50:33.786925   18056 kubeadm.go:310] 
	I0803 22:50:33.787020   18056 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 22:50:33.787033   18056 kubeadm.go:310] 
	I0803 22:50:33.787132   18056 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5bn30m.9lnl4t0eu1hcsdun \
	I0803 22:50:33.787251   18056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0803 22:50:33.787263   18056 cni.go:84] Creating CNI manager for ""
	I0803 22:50:33.787273   18056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:50:33.788702   18056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 22:50:33.789994   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 22:50:33.801248   18056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0803 22:50:33.820227   18056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 22:50:33.820319   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:33.820347   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-110246 minikube.k8s.io/updated_at=2024_08_03T22_50_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=addons-110246 minikube.k8s.io/primary=true
	I0803 22:50:33.942895   18056 ops.go:34] apiserver oom_adj: -16
	I0803 22:50:33.942962   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:34.443378   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:34.943961   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:35.443969   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:35.943075   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:36.443830   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:36.943319   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:37.443623   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:37.943307   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:38.443128   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:38.943962   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:39.443289   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:39.943316   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:40.443110   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:40.943415   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:41.443720   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:41.943021   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:42.443968   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:42.943900   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:43.443700   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:43.944013   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:44.443681   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:44.943834   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:45.443205   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:45.943593   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:46.443163   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:46.536987   18056 kubeadm.go:1113] duration metric: took 12.71673293s to wait for elevateKubeSystemPrivileges
	I0803 22:50:46.537024   18056 kubeadm.go:394] duration metric: took 23.112631323s to StartCluster
	I0803 22:50:46.537045   18056 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:46.537178   18056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 22:50:46.537652   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:46.537867   18056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 22:50:46.537886   18056 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 22:50:46.537953   18056 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0803 22:50:46.538044   18056 addons.go:69] Setting yakd=true in profile "addons-110246"
	I0803 22:50:46.538066   18056 addons.go:69] Setting inspektor-gadget=true in profile "addons-110246"
	I0803 22:50:46.538082   18056 addons.go:69] Setting metrics-server=true in profile "addons-110246"
	I0803 22:50:46.538100   18056 addons.go:234] Setting addon metrics-server=true in "addons-110246"
	I0803 22:50:46.538093   18056 addons.go:69] Setting gcp-auth=true in profile "addons-110246"
	I0803 22:50:46.538103   18056 config.go:182] Loaded profile config "addons-110246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 22:50:46.538114   18056 addons.go:69] Setting helm-tiller=true in profile "addons-110246"
	I0803 22:50:46.538124   18056 mustload.go:65] Loading cluster: addons-110246
	I0803 22:50:46.538130   18056 addons.go:234] Setting addon helm-tiller=true in "addons-110246"
	I0803 22:50:46.538135   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538106   18056 addons.go:234] Setting addon inspektor-gadget=true in "addons-110246"
	I0803 22:50:46.538161   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538165   18056 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-110246"
	I0803 22:50:46.538182   18056 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-110246"
	I0803 22:50:46.538204   18056 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-110246"
	I0803 22:50:46.538225   18056 addons.go:69] Setting registry=true in profile "addons-110246"
	I0803 22:50:46.538236   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538237   18056 addons.go:69] Setting cloud-spanner=true in profile "addons-110246"
	I0803 22:50:46.538258   18056 addons.go:234] Setting addon cloud-spanner=true in "addons-110246"
	I0803 22:50:46.538260   18056 addons.go:234] Setting addon registry=true in "addons-110246"
	I0803 22:50:46.538285   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538285   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538375   18056 addons.go:69] Setting volcano=true in profile "addons-110246"
	I0803 22:50:46.538402   18056 addons.go:234] Setting addon volcano=true in "addons-110246"
	I0803 22:50:46.538432   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538578   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538590   18056 addons.go:69] Setting ingress=true in profile "addons-110246"
	I0803 22:50:46.538593   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538608   18056 addons.go:69] Setting volumesnapshots=true in profile "addons-110246"
	I0803 22:50:46.538614   18056 addons.go:234] Setting addon ingress=true in "addons-110246"
	I0803 22:50:46.538615   18056 addons.go:69] Setting default-storageclass=true in profile "addons-110246"
	I0803 22:50:46.538625   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538632   18056 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-110246"
	I0803 22:50:46.538641   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538641   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538649   18056 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-110246"
	I0803 22:50:46.538662   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538230   18056 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-110246"
	I0803 22:50:46.538640   18056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-110246"
	I0803 22:50:46.538675   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538685   18056 addons.go:69] Setting ingress-dns=true in profile "addons-110246"
	I0803 22:50:46.538703   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538703   18056 addons.go:234] Setting addon ingress-dns=true in "addons-110246"
	I0803 22:50:46.538168   18056 addons.go:69] Setting storage-provisioner=true in profile "addons-110246"
	I0803 22:50:46.538077   18056 addons.go:234] Setting addon yakd=true in "addons-110246"
	I0803 22:50:46.538580   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538626   18056 addons.go:234] Setting addon volumesnapshots=true in "addons-110246"
	I0803 22:50:46.538759   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538765   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538742   18056 addons.go:234] Setting addon storage-provisioner=true in "addons-110246"
	I0803 22:50:46.538795   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538811   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538819   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538616   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538961   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538974   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538978   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538986   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539022   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539038   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.539080   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539096   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539145   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539171   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539267   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.539365   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539388   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539389   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539416   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539423   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.539433   18056 config.go:182] Loaded profile config "addons-110246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 22:50:46.539293   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539485   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539825   18056 out.go:177] * Verifying Kubernetes components...
	I0803 22:50:46.542803   18056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:50:46.559466   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0803 22:50:46.559862   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0803 22:50:46.559970   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40101
	I0803 22:50:46.560106   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.560113   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38417
	I0803 22:50:46.560325   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.560435   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.560543   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
	I0803 22:50:46.560783   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.560793   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.560901   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.560911   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.560962   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.561344   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.561451   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.561472   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.561502   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.561521   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.561576   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.561888   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.561921   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.562382   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.562433   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.569798   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.569846   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.570139   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.570182   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.571711   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.571738   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.571899   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43383
	I0803 22:50:46.571910   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.571931   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.572052   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.572539   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.572568   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.577758   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.577779   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.577872   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.577943   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.577965   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.579323   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.579329   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.579375   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.579853   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.580010   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.580051   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.580447   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.580486   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.605489   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0803 22:50:46.606239   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.606832   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.606866   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.607212   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.607825   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.607874   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.608086   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0803 22:50:46.608103   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I0803 22:50:46.608667   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.609091   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.609108   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.609493   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.609733   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.610815   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.611518   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.611541   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.611971   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.612592   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.614182   18056 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-110246"
	I0803 22:50:46.614229   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.614430   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.614594   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.614631   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.615785   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0803 22:50:46.616202   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.616654   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.616675   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.616914   18056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 22:50:46.616981   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.617143   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.617501   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34929
	I0803 22:50:46.617894   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.618700   18056 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 22:50:46.618720   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 22:50:46.618737   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.618773   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.619820   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.619842   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.620400   18056 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0803 22:50:46.620919   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.621447   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0803 22:50:46.621476   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.621862   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.622147   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.622303   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.622317   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.622627   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.622645   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.622917   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.623086   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.623119   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.623238   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.623515   18056 out.go:177]   - Using image docker.io/registry:2.8.3
	I0803 22:50:46.623656   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.623676   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.623964   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.624682   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0803 22:50:46.625118   18056 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0803 22:50:46.625134   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0803 22:50:46.625150   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.626301   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0803 22:50:46.626411   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I0803 22:50:46.627065   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.627154   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.627315   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33153
	I0803 22:50:46.627567   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.627579   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.627777   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.627793   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.627810   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.628761   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.628778   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.628801   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.628812   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I0803 22:50:46.628828   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.629892   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.629915   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.629916   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.629992   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.630291   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.630313   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.630345   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.630774   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.630791   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.630991   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.631024   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.631277   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.631652   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34199
	I0803 22:50:46.631823   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0803 22:50:46.632032   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.632067   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.632301   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.632328   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.632349   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.632477   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.632710   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.632728   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.632783   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.632922   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.633084   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.633620   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.633793   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.634695   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.634848   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.634859   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.635153   18056 addons.go:234] Setting addon default-storageclass=true in "addons-110246"
	I0803 22:50:46.635185   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.635296   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.635418   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.635482   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0803 22:50:46.635517   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.635546   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.635720   18056 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0803 22:50:46.636200   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.636232   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.636830   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.637050   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0803 22:50:46.637067   18056 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0803 22:50:46.637092   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.637232   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.637244   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.637922   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.637988   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.638995   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.639020   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.639523   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0803 22:50:46.640688   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0803 22:50:46.640707   18056 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0803 22:50:46.640727   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.640794   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.640823   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.640917   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.640942   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.640965   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.642181   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.642211   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.642253   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.642631   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.642796   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.642990   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.643586   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.644156   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.644185   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.644498   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.644952   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.644980   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.645189   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.645336   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.645514   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.645644   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.648045   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0803 22:50:46.648463   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.648954   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.648971   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.649333   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.649583   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.651699   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.653546   18056 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0803 22:50:46.654993   18056 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0803 22:50:46.655012   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0803 22:50:46.655029   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.658747   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39327
	I0803 22:50:46.659079   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.659172   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.659685   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.659702   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.660068   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.660118   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.660138   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.660329   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.660848   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.661087   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.661307   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.661385   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0803 22:50:46.661689   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.661876   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.662462   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.662482   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.662797   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.662969   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.663096   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.664544   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.665069   18056 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0803 22:50:46.665075   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0803 22:50:46.665600   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.666105   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.666127   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.666168   18056 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0803 22:50:46.666225   18056 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0803 22:50:46.666248   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0803 22:50:46.666267   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.666484   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.666714   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.667226   18056 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0803 22:50:46.667246   18056 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0803 22:50:46.667274   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.669630   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0803 22:50:46.670156   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I0803 22:50:46.670655   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.670674   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.670689   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.671052   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.671072   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.671237   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.671375   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.671511   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.671522   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.671720   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.671785   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.671836   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.671883   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.671898   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.671968   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.672499   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.672746   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.673123   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.673279   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.673291   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.673340   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.673439   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.673479   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.673478   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.673933   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.674106   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.674314   18056 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0803 22:50:46.675569   18056 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 22:50:46.675586   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0803 22:50:46.675601   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.676557   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.678749   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0803 22:50:46.679226   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.679832   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.679851   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.679888   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.680072   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.680218   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.680351   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.681837   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0803 22:50:46.683077   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0803 22:50:46.684840   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0803 22:50:46.686225   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0803 22:50:46.686913   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0803 22:50:46.687461   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.688434   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0803 22:50:46.688653   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0803 22:50:46.689061   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0803 22:50:46.689659   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.689927   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.690022   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.689945   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.690223   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.690248   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.690370   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0803 22:50:46.690733   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.690755   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.690810   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.690870   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.690966   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.691010   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.691451   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.691467   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.691798   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0803 22:50:46.693110   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0803 22:50:46.693189   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.693561   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0803 22:50:46.693566   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0803 22:50:46.693587   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.693648   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.694000   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.694065   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.694468   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.694790   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.694871   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0803 22:50:46.694889   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0803 22:50:46.694924   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.695255   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.695274   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.695391   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.695406   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.695801   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.695839   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.695861   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.695840   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.695884   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.696018   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.696065   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.696187   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:46.696199   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:46.696611   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:46.696642   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:46.696650   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:46.696659   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:46.696666   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:46.696928   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:46.696939   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:46.696950   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	W0803 22:50:46.697144   18056 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0803 22:50:46.697965   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.698032   18056 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0803 22:50:46.699011   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.699241   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.699709   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.699730   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.699877   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:46.699951   18056 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 22:50:46.700277   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0803 22:50:46.700296   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.700004   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.700506   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.700696   18056 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0803 22:50:46.700772   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.702257   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0803 22:50:46.702274   18056 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0803 22:50:46.702294   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.702369   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.702909   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:46.704252   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.704277   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.704293   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.704319   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0803 22:50:46.704437   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0803 22:50:46.704568   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.704753   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.704836   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.704897   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.705019   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.706031   18056 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 22:50:46.706114   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0803 22:50:46.706133   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.706321   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.706409   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.706428   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.706694   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.706714   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.707067   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.707229   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.707375   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.707506   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.707942   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.708422   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.708948   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.709451   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.709468   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	W0803 22:50:46.710030   18056 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54746->192.168.39.9:22: read: connection reset by peer
	I0803 22:50:46.710055   18056 retry.go:31] will retry after 309.264282ms: ssh: handshake failed: read tcp 192.168.39.1:54746->192.168.39.9:22: read: connection reset by peer
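
The handshake failure logged above is treated as transient and retried after roughly 309ms. A minimal shell sketch of the same retry idea, reusing the key path and address from this run (illustrative only; minikube's retry.go performs this internally with its own backoff, this is not minikube code):

    # hypothetical manual retry loop against the test VM
    for delay in 0.3 0.6 1.2; do
      ssh -p 22 -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa \
        docker@192.168.39.9 true && break
      sleep "$delay"
    done
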
	I0803 22:50:46.710094   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.710199   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.710377   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.710396   18056 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 22:50:46.710407   18056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 22:50:46.710420   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.710563   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.710698   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.713100   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.713502   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.713521   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.713678   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.713817   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.713922   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.714023   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.718794   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0803 22:50:46.719162   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.719576   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.719597   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.719945   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.720137   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.721547   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.723473   18056 out.go:177]   - Using image docker.io/busybox:stable
	I0803 22:50:46.724748   18056 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0803 22:50:46.725945   18056 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 22:50:46.725957   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0803 22:50:46.725972   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.729224   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.729637   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.729666   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.729795   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.730001   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.730217   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.730373   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:47.031046   18056 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0803 22:50:47.031069   18056 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0803 22:50:47.050327   18056 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0803 22:50:47.050344   18056 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0803 22:50:47.115233   18056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 22:50:47.115271   18056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 22:50:47.123103   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0803 22:50:47.123123   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0803 22:50:47.132170   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0803 22:50:47.138242   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 22:50:47.190948   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 22:50:47.191778   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 22:50:47.260702   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 22:50:47.304212   18056 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0803 22:50:47.304247   18056 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0803 22:50:47.322792   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0803 22:50:47.322828   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0803 22:50:47.322851   18056 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0803 22:50:47.322872   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0803 22:50:47.341924   18056 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0803 22:50:47.341950   18056 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0803 22:50:47.356450   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 22:50:47.359303   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0803 22:50:47.359327   18056 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0803 22:50:47.401479   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 22:50:47.407700   18056 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0803 22:50:47.407730   18056 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0803 22:50:47.559239   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0803 22:50:47.559271   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0803 22:50:47.585644   18056 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0803 22:50:47.585675   18056 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0803 22:50:47.590738   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0803 22:50:47.593508   18056 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0803 22:50:47.593527   18056 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0803 22:50:47.623248   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 22:50:47.623275   18056 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0803 22:50:47.663407   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0803 22:50:47.756646   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0803 22:50:47.756674   18056 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0803 22:50:47.801339   18056 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0803 22:50:47.801385   18056 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0803 22:50:47.816655   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0803 22:50:47.816680   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0803 22:50:47.839394   18056 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0803 22:50:47.839419   18056 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0803 22:50:47.903125   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 22:50:47.954230   18056 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0803 22:50:47.954263   18056 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0803 22:50:48.012148   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0803 22:50:48.012179   18056 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0803 22:50:48.067624   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0803 22:50:48.067656   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0803 22:50:48.106363   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0803 22:50:48.106399   18056 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0803 22:50:48.226957   18056 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0803 22:50:48.226988   18056 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0803 22:50:48.359269   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0803 22:50:48.359296   18056 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0803 22:50:48.429013   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0803 22:50:48.429045   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0803 22:50:48.437518   18056 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:48.437545   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0803 22:50:48.738562   18056 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0803 22:50:48.738593   18056 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0803 22:50:48.803239   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0803 22:50:48.803262   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0803 22:50:48.861268   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:48.891643   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0803 22:50:48.891670   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0803 22:50:49.103985   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0803 22:50:49.139682   18056 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 22:50:49.139702   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0803 22:50:49.226850   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0803 22:50:49.226884   18056 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0803 22:50:49.347390   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 22:50:49.521310   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0803 22:50:49.521343   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0803 22:50:49.577054   18056 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.461783796s)
	I0803 22:50:49.577095   18056 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.461796975s)
	I0803 22:50:49.577097   18056 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
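
The sed pipeline completed above injects a host record into the coredns ConfigMap so pods in the guest can resolve the host machine. Reconstructed from the sed expressions themselves (not copied from the live ConfigMap), the relevant Corefile fragment after the edit should look roughly like:

            hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf

with an additional "log" directive inserted immediately before the existing "errors" line.
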
	I0803 22:50:49.578077   18056 node_ready.go:35] waiting up to 6m0s for node "addons-110246" to be "Ready" ...
	I0803 22:50:49.581337   18056 node_ready.go:49] node "addons-110246" has status "Ready":"True"
	I0803 22:50:49.581378   18056 node_ready.go:38] duration metric: took 3.257535ms for node "addons-110246" to be "Ready" ...
	I0803 22:50:49.581390   18056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 22:50:49.596429   18056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace to be "Ready" ...
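
Each of these waits polls the pod's Ready condition for up to 6m0s. A hedged manual equivalent for the kube-dns label (the harness does this programmatically; shown as a kubectl command purely for illustration):

    kubectl --context addons-110246 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
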
	I0803 22:50:49.805408   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0803 22:50:49.805432   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0803 22:50:50.102281   18056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-110246" context rescaled to 1 replicas
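
The rescale above drops coredns from its default two replicas to one. A manual equivalent against this cluster would be roughly the following (minikube does this through client-go rather than kubectl; shown only for illustration):

    kubectl --context addons-110246 -n kube-system scale deployment coredns --replicas=1
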
	I0803 22:50:50.195404   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 22:50:50.195434   18056 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0803 22:50:50.270052   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.137843129s)
	I0803 22:50:50.270106   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:50.270119   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:50.270453   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:50.270466   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:50.270474   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:50.270488   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:50.270496   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:50.270724   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:50.270759   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:50.432455   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 22:50:51.658760   18056 pod_ready.go:102] pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:51.886465   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.748182787s)
	I0803 22:50:51.886524   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:51.886540   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:51.887388   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:51.887406   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:51.887428   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:51.887437   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:51.887673   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:51.887691   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.648319   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.45733094s)
	I0803 22:50:52.648368   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648381   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648382   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.456551146s)
	I0803 22:50:52.648425   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648441   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648622   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.648640   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.648650   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648663   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648683   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.648726   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.648748   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.648755   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.648764   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648771   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648895   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.648908   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.649132   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.649157   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.649177   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.667778   18056 pod_ready.go:92] pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.667801   18056 pod_ready.go:81] duration metric: took 3.071339238s for pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.667814   18056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hbp7b" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.699843   18056 pod_ready.go:92] pod "coredns-7db6d8ff4d-hbp7b" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.699864   18056 pod_ready.go:81] duration metric: took 32.042317ms for pod "coredns-7db6d8ff4d-hbp7b" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.699876   18056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.701103   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.701123   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.701397   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.701446   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.701458   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.721977   18056 pod_ready.go:92] pod "etcd-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.722000   18056 pod_ready.go:81] duration metric: took 22.116795ms for pod "etcd-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.722013   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.728394   18056 pod_ready.go:92] pod "kube-apiserver-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.728415   18056 pod_ready.go:81] duration metric: took 6.393731ms for pod "kube-apiserver-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.728426   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.735430   18056 pod_ready.go:92] pod "kube-controller-manager-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.735452   18056 pod_ready.go:81] duration metric: took 7.018737ms for pod "kube-controller-manager-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.735463   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lfl9m" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.013986   18056 pod_ready.go:92] pod "kube-proxy-lfl9m" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:53.014007   18056 pod_ready.go:81] duration metric: took 278.536554ms for pod "kube-proxy-lfl9m" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.014016   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.401659   18056 pod_ready.go:92] pod "kube-scheduler-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:53.401692   18056 pod_ready.go:81] duration metric: took 387.668627ms for pod "kube-scheduler-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.401706   18056 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.703925   18056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0803 22:50:53.703968   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:53.707094   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:53.707557   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:53.707585   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:53.707772   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:53.707993   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:53.708176   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:53.708364   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:54.110346   18056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0803 22:50:54.250832   18056 addons.go:234] Setting addon gcp-auth=true in "addons-110246"
	I0803 22:50:54.250901   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:54.251355   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:54.251401   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:54.266999   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0803 22:50:54.267436   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:54.267969   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:54.267987   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:54.268319   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:54.268909   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:54.268940   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:54.284039   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0803 22:50:54.284489   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:54.284961   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:54.284981   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:54.285339   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:54.285529   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:54.287242   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:54.287464   18056 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0803 22:50:54.287485   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:54.290517   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:54.290985   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:54.291012   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:54.291199   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:54.291364   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:54.291525   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:54.291653   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:55.430178   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:55.930942   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.574454564s)
	I0803 22:50:55.930991   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.930997   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.529485031s)
	I0803 22:50:55.931007   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.670270203s)
	I0803 22:50:55.931003   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931047   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931066   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931073   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.340308726s)
	I0803 22:50:55.931091   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931100   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931036   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931159   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931163   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.267719265s)
	I0803 22:50:55.931220   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931230   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931253   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.028094381s)
	I0803 22:50:55.931271   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931280   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931411   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.070107313s)
	I0803 22:50:55.931436   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	W0803 22:50:55.931438   18056 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0803 22:50:55.931457   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931458   18056 retry.go:31] will retry after 178.800834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
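
The retried failure above is an ordering problem: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the API server has not finished establishing that CRD, so the resource mapping lookup fails and the apply is retried. A hedged manual workaround, not what minikube itself runs, is to wait for the CRD to become established before re-applying the class:

    kubectl --context addons-110246 wait --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    # then re-apply csi-hostpath-snapshotclass.yaml from the addons directory
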
	I0803 22:50:55.931486   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931494   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931502   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931509   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931511   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.827498503s)
	I0803 22:50:55.931529   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931538   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931604   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931626   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931647   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931658   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931672   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931672   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931680   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931683   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931687   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931691   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931736   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931656   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.584223067s)
	I0803 22:50:55.931758   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931765   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931765   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931772   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931776   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931779   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931820   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931838   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931845   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931932   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.932020   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.932043   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.932053   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.932053   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.932062   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.932065   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933220   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933248   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933256   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933265   18056 addons.go:475] Verifying addon metrics-server=true in "addons-110246"
	I0803 22:50:55.933310   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933333   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933340   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933348   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.933371   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.933665   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933699   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933709   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933718   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933726   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.933743   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.933710   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933784   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933790   18056 addons.go:475] Verifying addon registry=true in "addons-110246"
	I0803 22:50:55.933808   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933818   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933923   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933946   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933953   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933974   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933983   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933993   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.934005   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.934257   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.934292   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.934304   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.934882   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.934920   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.934927   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.934935   18056 addons.go:475] Verifying addon ingress=true in "addons-110246"
	I0803 22:50:55.935597   18056 out.go:177] * Verifying registry addon...
	I0803 22:50:55.936797   18056 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-110246 service yakd-dashboard -n yakd-dashboard
	
	I0803 22:50:55.936822   18056 out.go:177] * Verifying ingress addon...
	I0803 22:50:55.938286   18056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0803 22:50:55.939438   18056 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0803 22:50:55.953898   18056 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0803 22:50:55.953921   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:55.956875   18056 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0803 22:50:55.956897   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:55.961832   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.961865   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.962124   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.962139   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:56.110743   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:56.443357   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:56.444903   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:56.945230   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:56.958294   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:57.159217   18056 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.871728882s)
	I0803 22:50:57.159213   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.726696805s)
	I0803 22:50:57.159394   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:57.159412   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:57.159650   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:57.159667   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:57.159678   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:57.159686   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:57.160689   18056 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0803 22:50:57.161415   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:57.161423   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:57.161438   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:57.161453   18056 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-110246"
	I0803 22:50:57.163354   18056 out.go:177] * Verifying csi-hostpath-driver addon...
	I0803 22:50:57.163354   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:57.164519   18056 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0803 22:50:57.164538   18056 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0803 22:50:57.165159   18056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0803 22:50:57.197180   18056 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0803 22:50:57.197208   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:57.234262   18056 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0803 22:50:57.234287   18056 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0803 22:50:57.324594   18056 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 22:50:57.324620   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0803 22:50:57.395096   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 22:50:57.443656   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:57.445903   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:57.714708   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:57.908366   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:57.945198   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:57.945437   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:58.074055   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.963259971s)
	I0803 22:50:58.074112   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.074125   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.074459   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.074498   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.074512   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.074520   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.074726   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:58.074767   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.074784   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.171648   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:58.443050   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:58.444813   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:58.678164   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:58.868756   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.473604857s)
	I0803 22:50:58.868836   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.868852   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.869151   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.869217   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.869180   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:58.869233   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.869242   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.869480   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.869494   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.869515   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:58.871488   18056 addons.go:475] Verifying addon gcp-auth=true in "addons-110246"
	I0803 22:50:58.874374   18056 out.go:177] * Verifying gcp-auth addon...
	I0803 22:50:58.876478   18056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0803 22:50:58.891057   18056 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0803 22:50:58.891079   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:50:58.943318   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:58.945270   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.172976   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:59.382909   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:50:59.444544   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:59.446049   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.670963   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:59.880922   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:50:59.916852   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:59.946508   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.947434   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:00.170483   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:00.380361   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:00.444868   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:00.445270   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:00.671773   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:00.880567   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:00.944374   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:00.944759   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.171397   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:01.380229   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:01.444835   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:01.445094   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.671142   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:01.880717   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:01.945407   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.946233   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:02.173727   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:02.379998   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:02.408031   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:02.446678   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:02.451003   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:02.671939   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:02.880704   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:02.944023   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:02.946132   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:03.171420   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:03.380794   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:03.444828   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:03.447477   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:03.738368   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:03.880818   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:03.943101   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:03.944533   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.178230   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:04.381557   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:04.443509   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:04.446659   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.671441   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:04.880373   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:04.908281   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:04.945103   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.947414   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:05.171653   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:05.380028   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:05.444644   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:05.446622   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:05.670339   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:05.880352   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:05.944035   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:05.945587   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:06.172517   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:06.380781   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:06.442908   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:06.442969   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:06.671164   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:06.880314   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:06.943426   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:06.943723   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:07.170796   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:07.380760   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:07.408522   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:07.442876   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:07.442986   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:07.670980   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:07.881280   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:07.948026   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:07.948170   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:08.171056   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:08.380665   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:08.445539   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:08.446474   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:08.671521   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:08.880814   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:08.943490   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:08.945006   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:09.175147   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:09.380254   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:09.443061   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:09.444284   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:09.671573   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:09.879670   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:09.911373   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:09.945204   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:09.945214   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:10.171583   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:10.380744   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:10.443998   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:10.445624   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:10.670168   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:10.879616   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:10.945813   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:10.946060   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:11.172277   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:11.381588   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:11.443133   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:11.448632   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:11.670560   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:11.880655   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:11.944353   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:11.944874   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:12.170198   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:12.380416   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:12.408927   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:12.443311   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:12.443377   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:12.671638   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:12.880116   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:12.952112   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:12.952545   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:13.171647   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:13.380820   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:13.444649   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:13.444991   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:13.671461   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:13.880988   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:14.149800   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:14.151050   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:14.170512   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:14.380151   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:14.444861   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:14.450098   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:14.670776   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:14.880614   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:14.907930   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:14.944266   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:14.953090   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:15.172676   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:15.380865   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:15.409969   18056 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"True"
	I0803 22:51:15.409995   18056 pod_ready.go:81] duration metric: took 22.008279196s for pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace to be "Ready" ...
	I0803 22:51:15.410003   18056 pod_ready.go:38] duration metric: took 25.828596629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 22:51:15.410018   18056 api_server.go:52] waiting for apiserver process to appear ...
	I0803 22:51:15.410063   18056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 22:51:15.430878   18056 api_server.go:72] duration metric: took 28.892957799s to wait for apiserver process to appear ...
	I0803 22:51:15.430919   18056 api_server.go:88] waiting for apiserver healthz status ...
	I0803 22:51:15.430943   18056 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I0803 22:51:15.435043   18056 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I0803 22:51:15.436193   18056 api_server.go:141] control plane version: v1.30.3
	I0803 22:51:15.436213   18056 api_server.go:131] duration metric: took 5.28654ms to wait for apiserver health ...
	I0803 22:51:15.436219   18056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 22:51:15.443740   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:15.444137   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:15.444616   18056 system_pods.go:59] 18 kube-system pods found
	I0803 22:51:15.444637   18056 system_pods.go:61] "coredns-7db6d8ff4d-hbp7b" [f9309e8e-3027-46d2-b989-2f285fcf10f4] Running
	I0803 22:51:15.444646   18056 system_pods.go:61] "csi-hostpath-attacher-0" [d5c3e8a0-1571-4ee3-a3cb-c726b1bddccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0803 22:51:15.444669   18056 system_pods.go:61] "csi-hostpath-resizer-0" [aa05ea21-0c03-4cc5-ba5d-4ef7dcce50b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0803 22:51:15.444682   18056 system_pods.go:61] "csi-hostpathplugin-cnwdb" [8d4d7011-2902-48df-a117-b7afc2e94916] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0803 22:51:15.444688   18056 system_pods.go:61] "etcd-addons-110246" [9586d714-7768-4e6c-93c7-1525119eef59] Running
	I0803 22:51:15.444694   18056 system_pods.go:61] "kube-apiserver-addons-110246" [5c7dc265-a4c7-4dfb-919d-cf428fcf1674] Running
	I0803 22:51:15.444698   18056 system_pods.go:61] "kube-controller-manager-addons-110246" [d568c53e-2834-4902-888b-b1627f65e978] Running
	I0803 22:51:15.444704   18056 system_pods.go:61] "kube-ingress-dns-minikube" [6a3fbc83-11d9-435d-87e5-1a494cf8c714] Running
	I0803 22:51:15.444707   18056 system_pods.go:61] "kube-proxy-lfl9m" [77bd9bb9-4577-4a8c-bdd2-970a32e4467b] Running
	I0803 22:51:15.444711   18056 system_pods.go:61] "kube-scheduler-addons-110246" [bbba425e-6b27-4154-81f2-3e80e941f607] Running
	I0803 22:51:15.444717   18056 system_pods.go:61] "metrics-server-c59844bb4-wbhpt" [bb904756-9056-4069-b53b-b35f8c0bde90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0803 22:51:15.444724   18056 system_pods.go:61] "nvidia-device-plugin-daemonset-f6gv6" [5d7278f7-553b-40c0-a2b4-059ba877ae75] Running
	I0803 22:51:15.444730   18056 system_pods.go:61] "registry-698f998955-4bhmt" [d9661cee-e4cd-468d-a421-0e709c62e138] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0803 22:51:15.444737   18056 system_pods.go:61] "registry-proxy-4sg2g" [df0da2d6-2cf2-471c-9b29-c471d61d67b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0803 22:51:15.444746   18056 system_pods.go:61] "snapshot-controller-745499f584-8t6hx" [66934af4-c7e5-4ec2-a4c0-983cc9acc894] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.444758   18056 system_pods.go:61] "snapshot-controller-745499f584-pgmqb" [610d2e0a-47ed-4aa1-b767-2701c23b6276] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.444768   18056 system_pods.go:61] "storage-provisioner" [4abb12c4-8b99-40af-8da9-1f36ecb668a0] Running
	I0803 22:51:15.444779   18056 system_pods.go:61] "tiller-deploy-6677d64bcd-zv5cc" [479ff6dd-8760-4dec-8f87-d1236801993f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0803 22:51:15.444787   18056 system_pods.go:74] duration metric: took 8.562356ms to wait for pod list to return data ...
	I0803 22:51:15.444796   18056 default_sa.go:34] waiting for default service account to be created ...
	I0803 22:51:15.446434   18056 default_sa.go:45] found service account: "default"
	I0803 22:51:15.446448   18056 default_sa.go:55] duration metric: took 1.646743ms for default service account to be created ...
	I0803 22:51:15.446454   18056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 22:51:15.454470   18056 system_pods.go:86] 18 kube-system pods found
	I0803 22:51:15.454488   18056 system_pods.go:89] "coredns-7db6d8ff4d-hbp7b" [f9309e8e-3027-46d2-b989-2f285fcf10f4] Running
	I0803 22:51:15.454496   18056 system_pods.go:89] "csi-hostpath-attacher-0" [d5c3e8a0-1571-4ee3-a3cb-c726b1bddccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0803 22:51:15.454504   18056 system_pods.go:89] "csi-hostpath-resizer-0" [aa05ea21-0c03-4cc5-ba5d-4ef7dcce50b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0803 22:51:15.454512   18056 system_pods.go:89] "csi-hostpathplugin-cnwdb" [8d4d7011-2902-48df-a117-b7afc2e94916] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0803 22:51:15.454519   18056 system_pods.go:89] "etcd-addons-110246" [9586d714-7768-4e6c-93c7-1525119eef59] Running
	I0803 22:51:15.454525   18056 system_pods.go:89] "kube-apiserver-addons-110246" [5c7dc265-a4c7-4dfb-919d-cf428fcf1674] Running
	I0803 22:51:15.454531   18056 system_pods.go:89] "kube-controller-manager-addons-110246" [d568c53e-2834-4902-888b-b1627f65e978] Running
	I0803 22:51:15.454536   18056 system_pods.go:89] "kube-ingress-dns-minikube" [6a3fbc83-11d9-435d-87e5-1a494cf8c714] Running
	I0803 22:51:15.454542   18056 system_pods.go:89] "kube-proxy-lfl9m" [77bd9bb9-4577-4a8c-bdd2-970a32e4467b] Running
	I0803 22:51:15.454547   18056 system_pods.go:89] "kube-scheduler-addons-110246" [bbba425e-6b27-4154-81f2-3e80e941f607] Running
	I0803 22:51:15.454553   18056 system_pods.go:89] "metrics-server-c59844bb4-wbhpt" [bb904756-9056-4069-b53b-b35f8c0bde90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0803 22:51:15.454560   18056 system_pods.go:89] "nvidia-device-plugin-daemonset-f6gv6" [5d7278f7-553b-40c0-a2b4-059ba877ae75] Running
	I0803 22:51:15.454567   18056 system_pods.go:89] "registry-698f998955-4bhmt" [d9661cee-e4cd-468d-a421-0e709c62e138] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0803 22:51:15.454575   18056 system_pods.go:89] "registry-proxy-4sg2g" [df0da2d6-2cf2-471c-9b29-c471d61d67b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0803 22:51:15.454584   18056 system_pods.go:89] "snapshot-controller-745499f584-8t6hx" [66934af4-c7e5-4ec2-a4c0-983cc9acc894] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.454592   18056 system_pods.go:89] "snapshot-controller-745499f584-pgmqb" [610d2e0a-47ed-4aa1-b767-2701c23b6276] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.454599   18056 system_pods.go:89] "storage-provisioner" [4abb12c4-8b99-40af-8da9-1f36ecb668a0] Running
	I0803 22:51:15.454604   18056 system_pods.go:89] "tiller-deploy-6677d64bcd-zv5cc" [479ff6dd-8760-4dec-8f87-d1236801993f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0803 22:51:15.454612   18056 system_pods.go:126] duration metric: took 8.152871ms to wait for k8s-apps to be running ...
	I0803 22:51:15.454618   18056 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 22:51:15.454659   18056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 22:51:15.469466   18056 system_svc.go:56] duration metric: took 14.837376ms WaitForService to wait for kubelet
	I0803 22:51:15.469491   18056 kubeadm.go:582] duration metric: took 28.931575342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 22:51:15.469512   18056 node_conditions.go:102] verifying NodePressure condition ...
	I0803 22:51:15.472479   18056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 22:51:15.472499   18056 node_conditions.go:123] node cpu capacity is 2
	I0803 22:51:15.472510   18056 node_conditions.go:105] duration metric: took 2.994661ms to run NodePressure ...
	I0803 22:51:15.472520   18056 start.go:241] waiting for startup goroutines ...
	I0803 22:51:15.670226   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:15.880710   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:15.944092   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:15.944634   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.171096   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:16.380721   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:16.443621   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:16.446055   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.671582   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:16.881429   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:16.944723   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.945537   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:17.174128   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:17.379968   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:17.445312   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:17.448021   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:17.672495   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:17.879768   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:17.947702   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:17.948318   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:18.171521   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:18.381612   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:18.444221   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:18.444696   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:18.671112   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:18.881086   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:18.943557   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:18.943978   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.170314   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:19.381559   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:19.443775   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.443880   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:19.670685   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:19.880371   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:19.943517   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.943888   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:20.170936   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:20.380367   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:20.444036   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:20.444134   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:20.671689   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:20.881325   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:20.944047   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:20.945881   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.170378   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:21.380649   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:21.443493   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:21.445759   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.671372   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:21.880241   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:21.944349   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.944882   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:22.171456   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:22.380684   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:22.443864   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:22.445048   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:22.671328   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:22.881570   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:22.942335   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:22.943889   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:23.170910   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:23.380906   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:23.443352   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:23.445363   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:23.671046   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:23.880488   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:23.944271   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:23.945887   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:24.170180   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:24.381207   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:24.442912   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:24.443937   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:24.671015   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:24.880204   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:24.943725   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:24.943956   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:25.170319   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:25.380785   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:25.443041   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:25.444632   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:25.671627   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:25.879711   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:25.943677   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:25.955507   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:26.171297   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:26.424372   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:26.447844   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:26.447973   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:26.670428   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:26.882640   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:26.942681   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:26.943923   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:27.170052   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:27.383142   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:27.443297   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:27.444625   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:27.670720   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:27.880651   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:27.942864   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:27.944726   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:28.170075   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:28.380352   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:28.444854   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:28.445459   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:28.677213   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:28.881527   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:28.944312   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:28.944418   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:29.170384   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:29.380907   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:29.442963   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:29.445375   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:29.671473   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:29.879895   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:29.943245   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:29.945165   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:30.170635   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:30.379408   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:30.444771   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:30.445186   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:30.671615   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:31.052180   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:31.052632   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:31.053298   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:31.171471   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:31.380565   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:31.442670   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:31.444672   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:31.670655   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:31.880766   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:31.945195   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:31.947153   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:32.170445   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:32.381646   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:32.443114   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:32.444427   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:32.671253   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:32.879758   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:32.943419   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:32.943637   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:33.171460   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:33.380084   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:33.443598   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:33.443710   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:33.679753   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:33.880092   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:33.944647   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:33.946403   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:34.171348   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:34.381106   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:34.445403   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:34.445619   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:35.061412   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:35.064619   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:35.065104   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:35.065216   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:35.170678   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:35.380316   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:35.444411   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:35.446330   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:35.674539   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:35.879919   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:35.943063   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:35.944180   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:36.170594   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:36.380326   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:36.444695   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:36.445002   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:36.680922   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:36.880854   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:36.942980   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:36.943841   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:37.170792   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:37.380611   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:37.443247   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:37.443756   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:37.677827   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:37.880245   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:37.943557   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:37.945405   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:38.173182   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:38.380257   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:38.444590   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:38.444609   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:38.670997   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:38.880401   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:38.944555   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:38.945064   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:39.171635   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:39.380099   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:39.443087   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:39.446292   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:39.670880   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:39.879961   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:39.943683   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:39.945213   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:40.170459   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:40.380660   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:40.443733   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:40.444446   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:40.671064   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:40.880070   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:40.945441   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:40.945765   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:41.389730   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:41.394067   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:41.447475   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:41.450661   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:41.670823   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:41.880744   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:41.943951   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:41.944289   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:42.170778   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:42.381467   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:42.443877   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:42.446564   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:42.671207   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:42.880358   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:42.945511   18056 kapi.go:107] duration metric: took 47.007220848s to wait for kubernetes.io/minikube-addons=registry ...
	I0803 22:51:42.945780   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:43.169944   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:43.380178   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:43.444283   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:43.671691   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:43.879993   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:43.943647   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:44.171070   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:44.380998   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:44.446180   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:44.670377   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:44.880196   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:44.947182   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:45.170566   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:45.380448   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:45.445386   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:45.671217   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:45.880913   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:45.943771   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:46.170368   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:46.380245   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:46.443961   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:46.671330   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:46.880504   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:46.946939   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:47.174028   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:47.380537   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:47.444627   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:47.671875   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:47.880309   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:47.944939   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:48.191662   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:48.380321   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:48.444020   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:48.673131   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:48.880707   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:48.944505   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:49.171930   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:49.381395   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:49.443690   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:49.670210   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:49.879574   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:49.945606   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:50.171479   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:50.380415   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:50.444854   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:50.671587   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:50.881894   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:50.943672   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:51.170541   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:51.380751   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:51.443742   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:51.670692   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:51.879809   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:51.943694   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:52.170849   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:52.380823   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:52.444069   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:52.673630   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:52.879977   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:52.943806   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:53.171768   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:53.380336   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:53.445404   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:53.671813   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:53.880146   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:53.944079   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:54.170924   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:54.380726   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:54.443511   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:54.671307   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:54.880260   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:54.943733   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:55.170330   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:55.380604   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:55.444871   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:55.843456   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:55.881135   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:55.944524   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:56.171969   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:56.380477   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:56.445059   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:56.669969   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:56.879976   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:56.951692   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:57.170771   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:57.380611   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:57.444440   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:57.671010   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:57.881241   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:57.944211   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:58.170744   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:58.380504   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:58.444439   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:58.671388   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:58.880374   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:58.944348   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:59.171176   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:59.379842   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:59.444538   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:59.671019   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:59.887138   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:59.946427   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:00.171339   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:00.380449   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:00.444113   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:00.671272   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:00.881077   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:00.943639   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:01.170842   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:01.380410   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:01.444300   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:01.670535   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:01.879921   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:01.943696   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:02.172403   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:02.380747   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:02.444412   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:03.026190   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:03.030780   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.031222   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:03.171206   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:03.380662   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.446422   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:03.671096   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:03.881727   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.946269   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:04.177770   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:04.380081   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:04.450682   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:04.682353   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:04.891611   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:04.945520   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:05.171287   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:05.380377   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:05.445041   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:05.671340   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:05.880350   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:05.945248   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:06.175882   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:06.382941   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:06.444218   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:06.670484   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:06.880988   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:06.943918   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:07.171535   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:07.380250   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:07.452156   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:07.670406   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:07.881102   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:07.944375   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:08.171051   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:08.380930   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:08.444403   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:08.671184   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:08.880751   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:08.943799   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:09.170984   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:09.384160   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:09.444297   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:09.670161   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:09.883199   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:09.946918   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:10.171473   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:10.379854   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:10.444634   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:10.671481   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:10.880741   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:10.943801   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:11.171793   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:11.380232   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:11.447603   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:11.670600   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:11.881032   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:11.944006   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:12.170590   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:12.381052   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:12.444805   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:12.670098   18056 kapi.go:107] duration metric: took 1m15.50493638s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0803 22:52:12.881871   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:12.944445   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:13.380991   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:13.444048   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:13.880339   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:13.943752   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:14.380155   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:14.444490   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:14.881135   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:14.944280   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:15.380462   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:15.444654   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:15.881076   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:15.944761   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:16.380127   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:16.444361   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:16.880470   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:16.944621   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:17.381749   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:17.444591   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:17.881460   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:17.945385   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:18.381014   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:18.444744   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:18.881226   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:18.944836   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:19.381300   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:19.445472   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:19.880874   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:19.944627   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:20.380794   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:20.444092   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:20.880499   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:20.947760   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:21.380534   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:21.444533   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:21.880365   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:21.944077   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:22.381039   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:22.444615   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:22.881083   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:22.944541   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:23.380537   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:23.444424   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:23.882483   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:23.944530   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:24.381248   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:24.444539   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:24.880706   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:24.945111   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:25.380141   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:25.445954   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:25.880073   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:25.943655   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:26.380853   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:26.444099   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:26.880314   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:26.944346   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:27.380660   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:27.443867   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:27.881071   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:27.943831   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:28.380622   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:28.447435   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:28.880272   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:28.944704   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:29.381226   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:29.445035   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:29.880041   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:29.943926   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:30.381788   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:30.443917   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:30.881739   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:30.943889   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:31.379975   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:31.443944   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:31.880080   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:31.944016   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:32.380697   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:32.444063   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:32.880185   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:32.944584   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:33.381822   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:33.444441   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:33.880892   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:33.944008   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:34.380357   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:34.444481   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:34.881603   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:34.944717   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:35.382252   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:35.445026   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:35.880736   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:35.944025   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:36.380129   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:36.444106   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:36.880419   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:36.944183   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:37.382034   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:37.444467   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:37.880806   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:37.944053   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:38.380242   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:38.444733   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:38.879863   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:38.944464   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:39.380948   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:39.444563   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:39.881281   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:39.944236   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:40.380585   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:40.445194   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:40.880249   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:40.944414   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:41.380393   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:41.445861   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:41.880685   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:41.943770   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:42.380916   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:42.444493   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:42.881179   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:42.944164   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:43.380095   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:43.444785   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:43.879887   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:43.944213   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:44.381820   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:44.443861   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:44.881250   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:44.944102   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:45.379962   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:45.444312   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:45.880619   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:45.945526   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:46.381667   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:46.445025   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:46.880484   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:46.944711   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:47.380234   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:47.445192   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:47.881176   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:47.943897   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:48.380253   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:48.444631   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:48.880715   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:48.944669   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:49.383647   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:49.444899   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:49.880271   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:49.945703   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:50.381990   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:50.444442   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:50.882035   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:50.944452   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:51.380219   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:51.444518   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:51.880816   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:51.943839   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:52.381176   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:52.444720   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:52.880500   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:52.944728   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:53.381222   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:53.444202   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:53.884002   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:53.947813   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:54.379947   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:54.444321   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:54.880590   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:54.944830   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:55.380713   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:55.443973   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:55.880138   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:55.943992   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:56.379849   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:56.443914   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:56.879975   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:56.943944   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:57.382066   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:57.444701   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:57.880401   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:57.944477   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:58.380315   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:58.444181   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:58.880217   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:58.944292   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:59.380422   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:59.444893   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:59.879937   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:59.944239   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:00.380402   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:00.444474   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:00.880912   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:00.944133   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:01.380019   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:01.444152   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:01.880499   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:01.945985   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:02.380086   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:02.444558   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:02.880740   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:02.943886   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:03.380509   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:03.444478   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:03.882042   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:03.944570   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:04.381497   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:04.445880   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:04.880382   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:04.943999   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:05.380464   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:05.444673   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:05.881052   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:05.944205   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:06.380439   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:06.444752   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:06.879729   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:06.943924   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:07.380131   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:07.445518   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:07.880700   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:07.945015   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:08.381111   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:08.444090   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:08.880173   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:08.944193   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:09.380396   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:09.444321   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:09.880864   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:09.944315   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:10.380559   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:10.444272   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:10.880333   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:10.943972   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:11.379958   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:11.444098   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:11.880174   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:11.944675   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:12.380162   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:12.444913   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:12.880554   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:12.944546   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:13.380683   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:13.444554   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:13.880746   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:13.944286   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:14.381586   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:14.447462   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:14.880884   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:14.943561   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:15.380454   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:15.444434   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:15.880942   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:15.944353   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:16.379773   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:16.452082   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:16.879647   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:16.943275   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:17.381171   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:17.444583   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:17.880445   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:17.944581   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:18.380669   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:18.443519   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:18.880671   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:18.944409   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:19.380293   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:19.444407   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:19.881228   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:19.944127   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:20.381456   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:20.444425   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:20.880278   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:20.944283   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:21.380305   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:21.444105   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:21.880541   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:21.945060   18056 kapi.go:107] duration metric: took 2m26.005622747s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0803 22:53:22.385398   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:22.880523   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:23.379918   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:23.881514   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:24.380076   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:24.881796   18056 kapi.go:107] duration metric: took 2m26.005315305s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0803 22:53:24.883443   18056 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-110246 cluster.
	I0803 22:53:24.884620   18056 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0803 22:53:24.885719   18056 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0803 22:53:24.886887   18056 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0803 22:53:24.888065   18056 addons.go:510] duration metric: took 2m38.350114202s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget nvidia-device-plugin yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0803 22:53:24.888099   18056 start.go:246] waiting for cluster config update ...
	I0803 22:53:24.888119   18056 start.go:255] writing updated cluster config ...
	I0803 22:53:24.888396   18056 ssh_runner.go:195] Run: rm -f paused
	I0803 22:53:24.938364   18056 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0803 22:53:24.940116   18056 out.go:177] * Done! kubectl is now configured to use "addons-110246" cluster and "default" namespace by default
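
	A minimal pod manifest illustrating the `gcp-auth-skip-secret` hint printed by the gcp-auth addon above (a sketch only; the pod name and image are placeholders and are not taken from this test run). Per the log message, the presence of the label key is what tells the gcp-auth webhook to skip mounting credentials into the pod:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                  # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"      # label key opts this pod out of credential mounting
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox   # placeholder image
	    command: ["sleep", "3600"]

	Note that, as the addon output states, pods created before gcp-auth was enabled must be recreated (or the addon re-enabled with --refresh) for credential mounting to apply; adding or removing the label on a running pod has no effect because the webhook acts at admission time.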
	
	
	==> CRI-O <==
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.054705612Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725842054679652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3cc9fd9-1b19-4b15-80e0-d530ab3dcecf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.055277139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1c82618-36c6-4344-a9a7-ef5134461e87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.055388709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1c82618-36c6-4344-a9a7-ef5134461e87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.055705987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa60d007cf2158142a7ae364be1960f02a133016e8d49b4058e25861a8867ba,PodSandboxId:f810fc260af314642e7792cb77aaf497278b1fb197e9fa3696aff111e90a2bde,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519770589624,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66b8c,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 0f4a5d31-3136-4d18-98c8-063d77af9778,},Annotations:map[string]string{io.kubernetes.container.hash: 795a12ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cf51b1937c5bc0ef73d7b91c598c3233e9893b16bd8309528315f7a47f4b3d,PodSandboxId:10cdcbaca10e2910c23b301c0ecf9762ef9d3c1c6257257ad725442fe6173bde,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519657110692,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-559p7,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eeadf2e1-6165-43cc-8a0c-d0b67486991c,},Annotations:map[string]string{io.kubernetes.container.hash: 11257a42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: m
etrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Lab
els:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b
61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09ca
acbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722725427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1c82618-36c6-4344-a9a7-ef5134461e87 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.093690034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3341caf1-6caa-4cfc-bcb7-5df894f12592 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.093760182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3341caf1-6caa-4cfc-bcb7-5df894f12592 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.095454231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dae1f5f8-999e-4a5c-9f6e-19aac192c49c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.097025985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725842096998362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dae1f5f8-999e-4a5c-9f6e-19aac192c49c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.097779119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae41a994-e7df-49b5-acfd-4f9067aa819a name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.097829548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae41a994-e7df-49b5-acfd-4f9067aa819a name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.098127019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa60d007cf2158142a7ae364be1960f02a133016e8d49b4058e25861a8867ba,PodSandboxId:f810fc260af314642e7792cb77aaf497278b1fb197e9fa3696aff111e90a2bde,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519770589624,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66b8c,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 0f4a5d31-3136-4d18-98c8-063d77af9778,},Annotations:map[string]string{io.kubernetes.container.hash: 795a12ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cf51b1937c5bc0ef73d7b91c598c3233e9893b16bd8309528315f7a47f4b3d,PodSandboxId:10cdcbaca10e2910c23b301c0ecf9762ef9d3c1c6257257ad725442fe6173bde,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519657110692,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-559p7,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eeadf2e1-6165-43cc-8a0c-d0b67486991c,},Annotations:map[string]string{io.kubernetes.container.hash: 11257a42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: m
etrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Lab
els:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b
61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09ca
acbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722725427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae41a994-e7df-49b5-acfd-4f9067aa819a name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.147109942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f67f7bba-3793-416c-baa9-883b1f962333 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.147211066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f67f7bba-3793-416c-baa9-883b1f962333 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.152608617Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60da81e1-2a70-43f8-a1da-d20b92f6be8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.153921055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725842153891816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60da81e1-2a70-43f8-a1da-d20b92f6be8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.155545308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=489f1914-0a0f-4094-856f-fcc3a5dc813e name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.155629118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=489f1914-0a0f-4094-856f-fcc3a5dc813e name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.155926836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa60d007cf2158142a7ae364be1960f02a133016e8d49b4058e25861a8867ba,PodSandboxId:f810fc260af314642e7792cb77aaf497278b1fb197e9fa3696aff111e90a2bde,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519770589624,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66b8c,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 0f4a5d31-3136-4d18-98c8-063d77af9778,},Annotations:map[string]string{io.kubernetes.container.hash: 795a12ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cf51b1937c5bc0ef73d7b91c598c3233e9893b16bd8309528315f7a47f4b3d,PodSandboxId:10cdcbaca10e2910c23b301c0ecf9762ef9d3c1c6257257ad725442fe6173bde,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519657110692,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-559p7,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eeadf2e1-6165-43cc-8a0c-d0b67486991c,},Annotations:map[string]string{io.kubernetes.container.hash: 11257a42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: m
etrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Lab
els:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b
61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09ca
acbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722725427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=489f1914-0a0f-4094-856f-fcc3a5dc813e name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.189277864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6af13b9-f084-4076-b0c9-ecd9ee08681f name=/runtime.v1.RuntimeService/Version
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.189424030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6af13b9-f084-4076-b0c9-ecd9ee08681f name=/runtime.v1.RuntimeService/Version
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.190581483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d52098e2-d08a-4653-a036-3ba61d8f6b31 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.191827873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725842191799066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d52098e2-d08a-4653-a036-3ba61d8f6b31 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.192635703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dbd970a-db72-4662-9468-55d3d6945997 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.192705717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dbd970a-db72-4662-9468-55d3d6945997 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:57:22 addons-110246 crio[681]: time="2024-08-03 22:57:22.193011727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa60d007cf2158142a7ae364be1960f02a133016e8d49b4058e25861a8867ba,PodSandboxId:f810fc260af314642e7792cb77aaf497278b1fb197e9fa3696aff111e90a2bde,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519770589624,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66b8c,io.kubernetes.pod.namespace: ingress-nginx,io.kubern
etes.pod.uid: 0f4a5d31-3136-4d18-98c8-063d77af9778,},Annotations:map[string]string{io.kubernetes.container.hash: 795a12ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4cf51b1937c5bc0ef73d7b91c598c3233e9893b16bd8309528315f7a47f4b3d,PodSandboxId:10cdcbaca10e2910c23b301c0ecf9762ef9d3c1c6257257ad725442fe6173bde,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722725519657110692,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-559p7,io.kubernetes.
pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eeadf2e1-6165-43cc-8a0c-d0b67486991c,},Annotations:map[string]string{io.kubernetes.container.hash: 11257a42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: m
etrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Lab
els:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io
.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b
61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09ca
acbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722725427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2dbd970a-db72-4662-9468-55d3d6945997 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7bb600c6f663       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   de174b7cf4ebb       hello-world-app-6778b5fc9f-ssxwk
	33a2a6d788927       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   3555045f0b2ac       nginx
	a88886060efe9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   ca16c417d8b2f       busybox
	eaa60d007cf21       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   5 minutes ago       Exited              patch                     0                   f810fc260af31       ingress-nginx-admission-patch-66b8c
	a4cf51b1937c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   5 minutes ago       Exited              create                    0                   10cdcbaca10e2       ingress-nginx-admission-create-559p7
	2820918c8d761       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        5 minutes ago       Running             metrics-server            0                   6f48d692b5da4       metrics-server-c59844bb4-wbhpt
	cca6528238e5e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             6 minutes ago       Running             storage-provisioner       0                   90e74ade03814       storage-provisioner
	319bbb8331ea7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             6 minutes ago       Running             coredns                   0                   66b36c9e8efbd       coredns-7db6d8ff4d-hbp7b
	314af48a2afb0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             6 minutes ago       Running             kube-proxy                0                   95e040ab7ea70       kube-proxy-lfl9m
	1160ba49f76d2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago       Running             etcd                      0                   3279939d63505       etcd-addons-110246
	3dceba7bfaac3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             6 minutes ago       Running             kube-controller-manager   0                   8b54663b3bc8c       kube-controller-manager-addons-110246
	4cc2277cc91fd       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             6 minutes ago       Running             kube-scheduler            0                   8b396e9d2266c       kube-scheduler-addons-110246
	f6c7d5bcf5b65       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             6 minutes ago       Running             kube-apiserver            0                   1e6afb16b9197       kube-apiserver-addons-110246
	
	
	==> coredns [319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f] <==
	[INFO] 10.244.0.7:46429 - 34813 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000618261s
	[INFO] 10.244.0.7:39065 - 2231 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093225s
	[INFO] 10.244.0.7:39065 - 28808 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064501s
	[INFO] 10.244.0.7:47178 - 40385 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083219s
	[INFO] 10.244.0.7:47178 - 40647 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051866s
	[INFO] 10.244.0.7:33924 - 58568 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084451s
	[INFO] 10.244.0.7:33924 - 20681 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000064526s
	[INFO] 10.244.0.7:42520 - 26215 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075904s
	[INFO] 10.244.0.7:42520 - 32362 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075269s
	[INFO] 10.244.0.7:54734 - 26138 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051652s
	[INFO] 10.244.0.7:54734 - 6757 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133727s
	[INFO] 10.244.0.7:33086 - 44761 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047933s
	[INFO] 10.244.0.7:33086 - 34263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003706s
	[INFO] 10.244.0.7:56729 - 39231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000036905s
	[INFO] 10.244.0.7:56729 - 64317 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039048s
	[INFO] 10.244.0.22:46021 - 16683 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00049237s
	[INFO] 10.244.0.22:41228 - 17048 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181134s
	[INFO] 10.244.0.22:41641 - 54046 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091817s
	[INFO] 10.244.0.22:49214 - 42036 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000062518s
	[INFO] 10.244.0.22:53977 - 49572 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116389s
	[INFO] 10.244.0.22:42481 - 20413 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000179361s
	[INFO] 10.244.0.22:34241 - 26941 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002271059s
	[INFO] 10.244.0.22:38459 - 42482 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002947553s
	[INFO] 10.244.0.26:54543 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000389559s
	[INFO] 10.244.0.26:39747 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156887s
	
	
	==> describe nodes <==
	Name:               addons-110246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-110246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=addons-110246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T22_50_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-110246
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 22:50:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-110246
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 22:57:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 22:55:08 +0000   Sat, 03 Aug 2024 22:50:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 22:55:08 +0000   Sat, 03 Aug 2024 22:50:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 22:55:08 +0000   Sat, 03 Aug 2024 22:50:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 22:55:08 +0000   Sat, 03 Aug 2024 22:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    addons-110246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 b301beb5cf5941c6ab473fb46617cd1b
	  System UUID:                b301beb5-cf59-41c6-ab47-3fb46617cd1b
	  Boot ID:                    65f6a715-e3fe-407e-a1ea-bee8318505e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  default                     hello-world-app-6778b5fc9f-ssxwk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-7db6d8ff4d-hbp7b                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m36s
	  kube-system                 etcd-addons-110246                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m49s
	  kube-system                 kube-apiserver-addons-110246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-controller-manager-addons-110246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-proxy-lfl9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-scheduler-addons-110246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 metrics-server-c59844bb4-wbhpt           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m30s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (9%)   170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m33s                  kube-proxy       
	  Normal  Starting                 6m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m55s (x8 over 6m55s)  kubelet          Node addons-110246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x8 over 6m55s)  kubelet          Node addons-110246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x7 over 6m55s)  kubelet          Node addons-110246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m49s                  kubelet          Node addons-110246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m49s                  kubelet          Node addons-110246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m49s                  kubelet          Node addons-110246 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m48s                  kubelet          Node addons-110246 status is now: NodeReady
	  Normal  RegisteredNode           6m36s                  node-controller  Node addons-110246 event: Registered Node addons-110246 in Controller
	
	
	==> dmesg <==
	[  +9.994140] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.757392] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.223665] kauditd_printk_skb: 2 callbacks suppressed
	[Aug 3 22:52] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.002045] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.252506] kauditd_printk_skb: 9 callbacks suppressed
	[ +37.165201] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 3 22:53] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.356321] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.369691] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.280988] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.775656] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.775213] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.087647] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 3 22:54] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.158994] kauditd_printk_skb: 63 callbacks suppressed
	[ +13.999768] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.397076] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.239854] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.052740] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.219253] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.267933] kauditd_printk_skb: 22 callbacks suppressed
	[Aug 3 22:55] kauditd_printk_skb: 33 callbacks suppressed
	[Aug 3 22:57] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.205456] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb] <==
	{"level":"info","ts":"2024-08-03T22:51:41.377136Z","caller":"traceutil/trace.go:171","msg":"trace[714241903] transaction","detail":"{read_only:false; response_revision:972; number_of_response:1; }","duration":"294.592682ms","start":"2024-08-03T22:51:41.082537Z","end":"2024-08-03T22:51:41.377129Z","steps":["trace[714241903] 'process raft request'  (duration: 293.982426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:51:41.3784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.65254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85094"}
	{"level":"info","ts":"2024-08-03T22:51:41.379607Z","caller":"traceutil/trace.go:171","msg":"trace[871921271] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:972; }","duration":"221.887538ms","start":"2024-08-03T22:51:41.157708Z","end":"2024-08-03T22:51:41.379595Z","steps":["trace[871921271] 'agreement among raft nodes before linearized reading'  (duration: 219.183412ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:51:55.829604Z","caller":"traceutil/trace.go:171","msg":"trace[1535032267] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1043; }","duration":"171.632725ms","start":"2024-08-03T22:51:55.657951Z","end":"2024-08-03T22:51:55.829584Z","steps":["trace[1535032267] 'read index received'  (duration: 171.504959ms)","trace[1535032267] 'applied index is now lower than readState.Index'  (duration: 127.323µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-03T22:51:55.829971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.957141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85371"}
	{"level":"info","ts":"2024-08-03T22:51:55.829999Z","caller":"traceutil/trace.go:171","msg":"trace[1590357607] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1013; }","duration":"172.065255ms","start":"2024-08-03T22:51:55.657926Z","end":"2024-08-03T22:51:55.829991Z","steps":["trace[1590357607] 'agreement among raft nodes before linearized reading'  (duration: 171.744404ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:51:55.830146Z","caller":"traceutil/trace.go:171","msg":"trace[878050054] transaction","detail":"{read_only:false; response_revision:1013; number_of_response:1; }","duration":"363.251446ms","start":"2024-08-03T22:51:55.466881Z","end":"2024-08-03T22:51:55.830133Z","steps":["trace[878050054] 'process raft request'  (duration: 362.615561ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:51:55.830245Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T22:51:55.466865Z","time spent":"363.311266ms","remote":"127.0.0.1:54508","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1005 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-03T22:52:03.010966Z","caller":"traceutil/trace.go:171","msg":"trace[1507259759] linearizableReadLoop","detail":"{readStateIndex:1090; appliedIndex:1089; }","duration":"353.657155ms","start":"2024-08-03T22:52:02.657288Z","end":"2024-08-03T22:52:03.010945Z","steps":["trace[1507259759] 'read index received'  (duration: 353.519254ms)","trace[1507259759] 'applied index is now lower than readState.Index'  (duration: 137.279µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-03T22:52:03.010992Z","caller":"traceutil/trace.go:171","msg":"trace[744862575] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"477.472748ms","start":"2024-08-03T22:52:02.533498Z","end":"2024-08-03T22:52:03.01097Z","steps":["trace[744862575] 'process raft request'  (duration: 477.29274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:52:03.011186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T22:52:02.533482Z","time spent":"477.600855ms","remote":"127.0.0.1:54430","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":798,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-c59844bb4-wbhpt.17e859b85bc6d6a0\" mod_revision:1003 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-c59844bb4-wbhpt.17e859b85bc6d6a0\" value_size:704 lease:6583014823065036420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-c59844bb4-wbhpt.17e859b85bc6d6a0\" > >"}
	{"level":"warn","ts":"2024-08-03T22:52:03.011267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.967202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85462"}
	{"level":"info","ts":"2024-08-03T22:52:03.011343Z","caller":"traceutil/trace.go:171","msg":"trace[539765651] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1057; }","duration":"354.070685ms","start":"2024-08-03T22:52:02.657263Z","end":"2024-08-03T22:52:03.011334Z","steps":["trace[539765651] 'agreement among raft nodes before linearized reading'  (duration: 353.785371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:52:03.011385Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T22:52:02.65725Z","time spent":"354.127824ms","remote":"127.0.0.1:54526","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85486,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-08-03T22:52:03.016886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.723818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-08-03T22:52:03.016927Z","caller":"traceutil/trace.go:171","msg":"trace[933517965] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1058; }","duration":"147.79941ms","start":"2024-08-03T22:52:02.869118Z","end":"2024-08-03T22:52:03.016917Z","steps":["trace[933517965] 'agreement among raft nodes before linearized reading'  (duration: 147.621881ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:52:03.017221Z","caller":"traceutil/trace.go:171","msg":"trace[1748400383] transaction","detail":"{read_only:false; response_revision:1058; number_of_response:1; }","duration":"207.568273ms","start":"2024-08-03T22:52:02.809643Z","end":"2024-08-03T22:52:03.017212Z","steps":["trace[1748400383] 'process raft request'  (duration: 207.028054ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:52:41.340707Z","caller":"traceutil/trace.go:171","msg":"trace[1642624007] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"111.909244ms","start":"2024-08-03T22:52:41.228782Z","end":"2024-08-03T22:52:41.340691Z","steps":["trace[1642624007] 'process raft request'  (duration: 111.793851ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:53:17.784235Z","caller":"traceutil/trace.go:171","msg":"trace[1355242074] linearizableReadLoop","detail":"{readStateIndex:1322; appliedIndex:1321; }","duration":"187.656328ms","start":"2024-08-03T22:53:17.596545Z","end":"2024-08-03T22:53:17.784201Z","steps":["trace[1355242074] 'read index received'  (duration: 183.972973ms)","trace[1355242074] 'applied index is now lower than readState.Index'  (duration: 3.682069ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-03T22:53:17.784638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.0156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-08-03T22:53:17.784828Z","caller":"traceutil/trace.go:171","msg":"trace[499869429] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1273; }","duration":"188.248811ms","start":"2024-08-03T22:53:17.596506Z","end":"2024-08-03T22:53:17.784754Z","steps":["trace[499869429] 'agreement among raft nodes before linearized reading'  (duration: 187.75768ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:12.824257Z","caller":"traceutil/trace.go:171","msg":"trace[1832367938] transaction","detail":"{read_only:false; response_revision:1640; number_of_response:1; }","duration":"165.337953ms","start":"2024-08-03T22:54:12.658854Z","end":"2024-08-03T22:54:12.824192Z","steps":["trace[1832367938] 'process raft request'  (duration: 165.233428ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:18.287859Z","caller":"traceutil/trace.go:171","msg":"trace[450811793] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"129.598611ms","start":"2024-08-03T22:54:18.158244Z","end":"2024-08-03T22:54:18.287843Z","steps":["trace[450811793] 'process raft request'  (duration: 129.276389ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:25.053784Z","caller":"traceutil/trace.go:171","msg":"trace[871754638] transaction","detail":"{read_only:false; response_revision:1686; number_of_response:1; }","duration":"167.61343ms","start":"2024-08-03T22:54:24.886155Z","end":"2024-08-03T22:54:25.053769Z","steps":["trace[871754638] 'process raft request'  (duration: 167.496357ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:52.183009Z","caller":"traceutil/trace.go:171","msg":"trace[2089842323] transaction","detail":"{read_only:false; response_revision:1869; number_of_response:1; }","duration":"175.992276ms","start":"2024-08-03T22:54:52.006989Z","end":"2024-08-03T22:54:52.182982Z","steps":["trace[2089842323] 'process raft request'  (duration: 175.545582ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:57:22 up 7 min,  0 users,  load average: 0.28, 0.65, 0.42
	Linux addons-110246 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1] <==
	E0803 22:52:52.565202       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.55.238:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.55.238:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.55.238:443: connect: connection refused
	I0803 22:52:52.641935       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0803 22:53:37.766073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:45212: use of closed network connection
	E0803 22:53:37.946678       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:45228: use of closed network connection
	I0803 22:54:05.044158       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.232.213"}
	E0803 22:54:16.056152       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0803 22:54:25.691634       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0803 22:54:36.155453       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.9:8443->10.244.0.30:57560: read: connection reset by peer
	I0803 22:54:43.874785       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0803 22:54:44.913009       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0803 22:54:49.389381       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0803 22:54:49.566910       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.132.199"}
	I0803 22:55:05.504042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.504177       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.544256       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.544558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.546893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.547266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.554953       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.555046       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.605818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	W0803 22:55:06.548208       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0803 22:55:06.607391       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0803 22:55:06.607391       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0803 22:57:12.099821       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.52.191"}
	
	
	==> kube-controller-manager [3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2] <==
	W0803 22:56:10.417497       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:56:10.417807       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:56:10.924035       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:56:10.924287       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:56:13.224372       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:56:13.224476       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:56:16.719710       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:56:16.719841       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:56:55.139554       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:56:55.139671       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:57:05.577561       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:57:05.577597       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:57:07.676909       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:57:07.677015       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0803 22:57:11.938991       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="27.217872ms"
	I0803 22:57:11.959659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="20.218188ms"
	I0803 22:57:11.983586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="23.871983ms"
	I0803 22:57:11.983800       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="105.136µs"
	W0803 22:57:13.831762       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:57:13.831795       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0803 22:57:14.244899       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0803 22:57:14.247607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="11.391µs"
	I0803 22:57:14.254432       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0803 22:57:15.617185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="7.442491ms"
	I0803 22:57:15.617668       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.845µs"
	
	
	==> kube-proxy [314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f] <==
	I0803 22:50:48.677270       1 server_linux.go:69] "Using iptables proxy"
	I0803 22:50:48.703273       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	I0803 22:50:48.787529       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 22:50:48.787578       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 22:50:48.787598       1 server_linux.go:165] "Using iptables Proxier"
	I0803 22:50:48.790584       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 22:50:48.790854       1 server.go:872] "Version info" version="v1.30.3"
	I0803 22:50:48.790866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 22:50:48.792105       1 config.go:192] "Starting service config controller"
	I0803 22:50:48.792119       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 22:50:48.792145       1 config.go:101] "Starting endpoint slice config controller"
	I0803 22:50:48.792149       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 22:50:48.793106       1 config.go:319] "Starting node config controller"
	I0803 22:50:48.793114       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 22:50:48.894466       1 shared_informer.go:320] Caches are synced for service config
	I0803 22:50:48.894511       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 22:50:48.894478       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717] <==
	W0803 22:50:30.514587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 22:50:30.517514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 22:50:31.347553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 22:50:31.347668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 22:50:31.378094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 22:50:31.378192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 22:50:31.400194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 22:50:31.400241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 22:50:31.589990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 22:50:31.590037       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 22:50:31.631617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 22:50:31.631820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 22:50:31.646583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 22:50:31.646632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 22:50:31.668007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 22:50:31.668050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 22:50:31.696226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 22:50:31.696273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 22:50:31.727840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 22:50:31.729465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 22:50:31.739966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 22:50:31.740095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 22:50:31.795193       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 22:50:31.795242       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 22:50:34.496493       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 03 22:57:11 addons-110246 kubelet[1273]: I0803 22:57:11.942957    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4d7011-2902-48df-a117-b7afc2e94916" containerName="hostpath"
	Aug 03 22:57:11 addons-110246 kubelet[1273]: I0803 22:57:11.942964    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4d7011-2902-48df-a117-b7afc2e94916" containerName="csi-snapshotter"
	Aug 03 22:57:11 addons-110246 kubelet[1273]: I0803 22:57:11.942969    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="610d2e0a-47ed-4aa1-b767-2701c23b6276" containerName="volume-snapshot-controller"
	Aug 03 22:57:12 addons-110246 kubelet[1273]: I0803 22:57:12.064413    1273 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65r5s\" (UniqueName: \"kubernetes.io/projected/edf29210-e2de-4bb2-885a-b86e2ea89fda-kube-api-access-65r5s\") pod \"hello-world-app-6778b5fc9f-ssxwk\" (UID: \"edf29210-e2de-4bb2-885a-b86e2ea89fda\") " pod="default/hello-world-app-6778b5fc9f-ssxwk"
	Aug 03 22:57:13 addons-110246 kubelet[1273]: I0803 22:57:13.171711    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdqvc\" (UniqueName: \"kubernetes.io/projected/6a3fbc83-11d9-435d-87e5-1a494cf8c714-kube-api-access-vdqvc\") pod \"6a3fbc83-11d9-435d-87e5-1a494cf8c714\" (UID: \"6a3fbc83-11d9-435d-87e5-1a494cf8c714\") "
	Aug 03 22:57:13 addons-110246 kubelet[1273]: I0803 22:57:13.176087    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3fbc83-11d9-435d-87e5-1a494cf8c714-kube-api-access-vdqvc" (OuterVolumeSpecName: "kube-api-access-vdqvc") pod "6a3fbc83-11d9-435d-87e5-1a494cf8c714" (UID: "6a3fbc83-11d9-435d-87e5-1a494cf8c714"). InnerVolumeSpecName "kube-api-access-vdqvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 03 22:57:13 addons-110246 kubelet[1273]: I0803 22:57:13.272561    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vdqvc\" (UniqueName: \"kubernetes.io/projected/6a3fbc83-11d9-435d-87e5-1a494cf8c714-kube-api-access-vdqvc\") on node \"addons-110246\" DevicePath \"\""
	Aug 03 22:57:13 addons-110246 kubelet[1273]: I0803 22:57:13.585492    1273 scope.go:117] "RemoveContainer" containerID="eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594"
	Aug 03 22:57:13 addons-110246 kubelet[1273]: I0803 22:57:13.638136    1273 scope.go:117] "RemoveContainer" containerID="eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594"
	Aug 03 22:57:13 addons-110246 kubelet[1273]: E0803 22:57:13.638809    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594\": container with ID starting with eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594 not found: ID does not exist" containerID="eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594"
	Aug 03 22:57:13 addons-110246 kubelet[1273]: I0803 22:57:13.638865    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594"} err="failed to get container status \"eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594\": rpc error: code = NotFound desc = could not find container \"eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594\": container with ID starting with eef5f4e93a5b648d6b72399d18669693b913acdb39c8d1cf09b52750a4837594 not found: ID does not exist"
	Aug 03 22:57:15 addons-110246 kubelet[1273]: I0803 22:57:15.082647    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4a5d31-3136-4d18-98c8-063d77af9778" path="/var/lib/kubelet/pods/0f4a5d31-3136-4d18-98c8-063d77af9778/volumes"
	Aug 03 22:57:15 addons-110246 kubelet[1273]: I0803 22:57:15.083152    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3fbc83-11d9-435d-87e5-1a494cf8c714" path="/var/lib/kubelet/pods/6a3fbc83-11d9-435d-87e5-1a494cf8c714/volumes"
	Aug 03 22:57:15 addons-110246 kubelet[1273]: I0803 22:57:15.083596    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeadf2e1-6165-43cc-8a0c-d0b67486991c" path="/var/lib/kubelet/pods/eeadf2e1-6165-43cc-8a0c-d0b67486991c/volumes"
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.501362    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb2zq\" (UniqueName: \"kubernetes.io/projected/c07998a9-82f2-4874-ba2d-223974e5a260-kube-api-access-qb2zq\") pod \"c07998a9-82f2-4874-ba2d-223974e5a260\" (UID: \"c07998a9-82f2-4874-ba2d-223974e5a260\") "
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.501406    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c07998a9-82f2-4874-ba2d-223974e5a260-webhook-cert\") pod \"c07998a9-82f2-4874-ba2d-223974e5a260\" (UID: \"c07998a9-82f2-4874-ba2d-223974e5a260\") "
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.503822    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07998a9-82f2-4874-ba2d-223974e5a260-kube-api-access-qb2zq" (OuterVolumeSpecName: "kube-api-access-qb2zq") pod "c07998a9-82f2-4874-ba2d-223974e5a260" (UID: "c07998a9-82f2-4874-ba2d-223974e5a260"). InnerVolumeSpecName "kube-api-access-qb2zq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.510180    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c07998a9-82f2-4874-ba2d-223974e5a260-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c07998a9-82f2-4874-ba2d-223974e5a260" (UID: "c07998a9-82f2-4874-ba2d-223974e5a260"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.602056    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qb2zq\" (UniqueName: \"kubernetes.io/projected/c07998a9-82f2-4874-ba2d-223974e5a260-kube-api-access-qb2zq\") on node \"addons-110246\" DevicePath \"\""
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.602110    1273 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c07998a9-82f2-4874-ba2d-223974e5a260-webhook-cert\") on node \"addons-110246\" DevicePath \"\""
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.607501    1273 scope.go:117] "RemoveContainer" containerID="f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.629536    1273 scope.go:117] "RemoveContainer" containerID="f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"
	Aug 03 22:57:17 addons-110246 kubelet[1273]: E0803 22:57:17.630086    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b\": container with ID starting with f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b not found: ID does not exist" containerID="f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.630112    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"} err="failed to get container status \"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b\": rpc error: code = NotFound desc = could not find container \"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b\": container with ID starting with f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b not found: ID does not exist"
	Aug 03 22:57:19 addons-110246 kubelet[1273]: I0803 22:57:19.081224    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07998a9-82f2-4874-ba2d-223974e5a260" path="/var/lib/kubelet/pods/c07998a9-82f2-4874-ba2d-223974e5a260/volumes"
	
	
	==> storage-provisioner [cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715] <==
	I0803 22:50:53.972833       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0803 22:50:54.034710       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0803 22:50:54.034838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0803 22:50:54.054681       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0803 22:50:54.054896       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-110246_29ffda3c-5de8-4822-b2d3-50fd51ed22cc!
	I0803 22:50:54.055288       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14ab441a-6864-49e0-8517-a57d647f6b8a", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-110246_29ffda3c-5de8-4822-b2d3-50fd51ed22cc became leader
	I0803 22:50:54.155700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-110246_29ffda3c-5de8-4822-b2d3-50fd51ed22cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-110246 -n addons-110246
helpers_test.go:261: (dbg) Run:  kubectl --context addons-110246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.14s)
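
For reference, a minimal way to triage this kind of ingress timeout by hand, assuming the same profile name (addons-110246) and the controller deployment name inferred from the ReplicaSet shown in the controller-manager log above (ingress-nginx-controller), would be:

	# confirm the controller pod is Running and the Ingress object got an address
	kubectl --context addons-110246 -n ingress-nginx get pods -o wide
	kubectl --context addons-110246 get ingress -A
	# repeat the in-node curl verbosely, using the Host header instead of DNS, as the recorded ssh step does
	out/minikube-linux-amd64 -p addons-110246 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# check whether the request ever reached the controller
	kubectl --context addons-110246 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

If the verbose curl reports a connection timeout rather than an HTTP error, the problem usually lies with the controller service or node networking rather than with the Ingress rule itself.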

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (322.54s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.624556ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wbhpt" [bb904756-9056-4069-b53b-b35f8c0bde90] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006126889s
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (75.125159ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 3m23.122594514s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (75.579695ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 3m25.568968442s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (65.401541ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 3m28.674885461s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (75.548558ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 3m32.487254595s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (65.528285ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 3m38.879389079s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (71.251808ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 3m52.151018034s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (69.972537ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 4m8.852006135s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (61.193236ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 4m33.534974713s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (70.101331ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 5m23.022031249s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (65.798777ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 6m27.826277524s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (63.474513ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 7m57.140547814s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-110246 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-110246 top pods -n kube-system: exit status 1 (60.865059ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hbp7b, age: 8m37.840464114s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
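
The repeated "Metrics not available" errors above typically mean the metrics.k8s.io API is registered but metrics-server has not yet produced (or cannot serve) any samples; the kube-apiserver log earlier in this report shows the v1beta1.metrics.k8s.io endpoint refusing connections shortly after startup. A minimal manual check, assuming the same context name and a deployment name inferred from the pod name above (metrics-server), could be:

	# is the aggregated API registered and reporting Available=True?
	kubectl --context addons-110246 get apiservice v1beta1.metrics.k8s.io
	# does the aggregated API actually serve data?
	kubectl --context addons-110246 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	# what does metrics-server itself log?
	kubectl --context addons-110246 -n kube-system logs deploy/metrics-server --tail=50
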
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-110246 -n addons-110246
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 logs -n 25: (1.318803944s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-598666                                                                     | download-only-598666 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-308590 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | binary-mirror-308590                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37089                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-308590                                                                     | binary-mirror-308590 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| addons  | enable dashboard -p                                                                         | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-110246 --wait=true                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:53 UTC | 03 Aug 24 22:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:53 UTC | 03 Aug 24 22:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-110246 ssh cat                                                                       | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:53 UTC | 03 Aug 24 22:53 UTC |
	|         | /opt/local-path-provisioner/pvc-35102428-567b-4022-9a55-8047dad0f959_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-110246 ip                                                                            | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | -p addons-110246                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | addons-110246                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:54 UTC |
	|         | -p addons-110246                                                                            |                      |         |         |                     |                     |
	| addons  | addons-110246 addons                                                                        | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:54 UTC | 03 Aug 24 22:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-110246 ssh curl -s                                                                   | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:55 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-110246 addons                                                                        | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:55 UTC | 03 Aug 24 22:55 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-110246 ip                                                                            | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:57 UTC | 03 Aug 24 22:57 UTC |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:57 UTC | 03 Aug 24 22:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-110246 addons disable                                                                | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:57 UTC | 03 Aug 24 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-110246 addons                                                                        | addons-110246        | jenkins | v1.33.1 | 03 Aug 24 22:59 UTC | 03 Aug 24 22:59 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:49:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:49:52.279620   18056 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:49:52.279827   18056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:52.279835   18056 out.go:304] Setting ErrFile to fd 2...
	I0803 22:49:52.279840   18056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:52.279989   18056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 22:49:52.280546   18056 out.go:298] Setting JSON to false
	I0803 22:49:52.281340   18056 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1936,"bootTime":1722723456,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 22:49:52.281416   18056 start.go:139] virtualization: kvm guest
	I0803 22:49:52.283451   18056 out.go:177] * [addons-110246] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 22:49:52.285015   18056 notify.go:220] Checking for updates...
	I0803 22:49:52.285031   18056 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 22:49:52.286428   18056 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:49:52.287781   18056 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 22:49:52.289248   18056 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:49:52.290629   18056 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 22:49:52.292048   18056 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 22:49:52.293652   18056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:49:52.325152   18056 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 22:49:52.326571   18056 start.go:297] selected driver: kvm2
	I0803 22:49:52.326587   18056 start.go:901] validating driver "kvm2" against <nil>
	I0803 22:49:52.326609   18056 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 22:49:52.327255   18056 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:49:52.327320   18056 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 22:49:52.342294   18056 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 22:49:52.342338   18056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:49:52.342535   18056 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 22:49:52.342590   18056 cni.go:84] Creating CNI manager for ""
	I0803 22:49:52.342602   18056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:49:52.342611   18056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 22:49:52.342672   18056 start.go:340] cluster config:
	{Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:49:52.342775   18056 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:49:52.344508   18056 out.go:177] * Starting "addons-110246" primary control-plane node in "addons-110246" cluster
	I0803 22:49:52.346107   18056 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 22:49:52.346151   18056 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 22:49:52.346158   18056 cache.go:56] Caching tarball of preloaded images
	I0803 22:49:52.346230   18056 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 22:49:52.346240   18056 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 22:49:52.346530   18056 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/config.json ...
	I0803 22:49:52.346548   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/config.json: {Name:mk05bdfa1b646526b5412bf86d27a9b4efa97e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:52.346674   18056 start.go:360] acquireMachinesLock for addons-110246: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 22:49:52.346717   18056 start.go:364] duration metric: took 30.605µs to acquireMachinesLock for "addons-110246"
	I0803 22:49:52.346733   18056 start.go:93] Provisioning new machine with config: &{Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 22:49:52.346781   18056 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 22:49:52.348505   18056 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0803 22:49:52.348626   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:49:52.348668   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:49:52.363490   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0803 22:49:52.363977   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:49:52.364561   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:49:52.364591   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:49:52.364954   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:49:52.365146   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:49:52.365305   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:49:52.365456   18056 start.go:159] libmachine.API.Create for "addons-110246" (driver="kvm2")
	I0803 22:49:52.365484   18056 client.go:168] LocalClient.Create starting
	I0803 22:49:52.365527   18056 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 22:49:52.506151   18056 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 22:49:52.600941   18056 main.go:141] libmachine: Running pre-create checks...
	I0803 22:49:52.600964   18056 main.go:141] libmachine: (addons-110246) Calling .PreCreateCheck
	I0803 22:49:52.601528   18056 main.go:141] libmachine: (addons-110246) Calling .GetConfigRaw
	I0803 22:49:52.601967   18056 main.go:141] libmachine: Creating machine...
	I0803 22:49:52.601981   18056 main.go:141] libmachine: (addons-110246) Calling .Create
	I0803 22:49:52.602204   18056 main.go:141] libmachine: (addons-110246) Creating KVM machine...
	I0803 22:49:52.603240   18056 main.go:141] libmachine: (addons-110246) DBG | found existing default KVM network
	I0803 22:49:52.604092   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:52.603971   18078 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012ed90}
	I0803 22:49:52.604175   18056 main.go:141] libmachine: (addons-110246) DBG | created network xml: 
	I0803 22:49:52.604200   18056 main.go:141] libmachine: (addons-110246) DBG | <network>
	I0803 22:49:52.604211   18056 main.go:141] libmachine: (addons-110246) DBG |   <name>mk-addons-110246</name>
	I0803 22:49:52.604221   18056 main.go:141] libmachine: (addons-110246) DBG |   <dns enable='no'/>
	I0803 22:49:52.604230   18056 main.go:141] libmachine: (addons-110246) DBG |   
	I0803 22:49:52.604243   18056 main.go:141] libmachine: (addons-110246) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0803 22:49:52.604251   18056 main.go:141] libmachine: (addons-110246) DBG |     <dhcp>
	I0803 22:49:52.604260   18056 main.go:141] libmachine: (addons-110246) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0803 22:49:52.604268   18056 main.go:141] libmachine: (addons-110246) DBG |     </dhcp>
	I0803 22:49:52.604273   18056 main.go:141] libmachine: (addons-110246) DBG |   </ip>
	I0803 22:49:52.604280   18056 main.go:141] libmachine: (addons-110246) DBG |   
	I0803 22:49:52.604285   18056 main.go:141] libmachine: (addons-110246) DBG | </network>
	I0803 22:49:52.604302   18056 main.go:141] libmachine: (addons-110246) DBG | 
	I0803 22:49:52.609630   18056 main.go:141] libmachine: (addons-110246) DBG | trying to create private KVM network mk-addons-110246 192.168.39.0/24...
	I0803 22:49:52.673239   18056 main.go:141] libmachine: (addons-110246) DBG | private KVM network mk-addons-110246 192.168.39.0/24 created
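	(Editor's note: the network XML above is passed to libvirt more or less directly. A minimal sketch of that step in Go, assuming the libvirt.org/go/libvirt bindings that the kvm2 driver wraps; the program layout, variable names, and the inlined XML literal are illustrative, not the driver's actual code.)

    package main

    import (
    	"log"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	// Connect to the system libvirt daemon, matching KVMQemuURI:qemu:///system above.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Illustrative XML mirroring the mk-addons-110246 definition in the log.
    	networkXML := `<network>
      <name>mk-addons-110246</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    	// Define the persistent network object, then start it.
    	net, err := conn.NetworkDefineXML(networkXML)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer net.Free()
    	if err := net.Create(); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("private KVM network created")
    }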
	I0803 22:49:52.673272   18056 main.go:141] libmachine: (addons-110246) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246 ...
	I0803 22:49:52.673285   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:52.673212   18078 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:49:52.673332   18056 main.go:141] libmachine: (addons-110246) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 22:49:52.673386   18056 main.go:141] libmachine: (addons-110246) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 22:49:52.931705   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:52.931599   18078 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa...
	I0803 22:49:53.077689   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:53.077587   18078 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/addons-110246.rawdisk...
	I0803 22:49:53.077730   18056 main.go:141] libmachine: (addons-110246) DBG | Writing magic tar header
	I0803 22:49:53.077744   18056 main.go:141] libmachine: (addons-110246) DBG | Writing SSH key tar header
	I0803 22:49:53.077755   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:53.077699   18078 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246 ...
	I0803 22:49:53.077840   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246
	I0803 22:49:53.077882   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 22:49:53.077895   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:49:53.077905   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 22:49:53.077912   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 22:49:53.077934   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home/jenkins
	I0803 22:49:53.077949   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246 (perms=drwx------)
	I0803 22:49:53.077961   18056 main.go:141] libmachine: (addons-110246) DBG | Checking permissions on dir: /home
	I0803 22:49:53.077975   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 22:49:53.077993   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 22:49:53.078004   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 22:49:53.078010   18056 main.go:141] libmachine: (addons-110246) DBG | Skipping /home - not owner
	I0803 22:49:53.078024   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 22:49:53.078049   18056 main.go:141] libmachine: (addons-110246) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 22:49:53.078071   18056 main.go:141] libmachine: (addons-110246) Creating domain...
	I0803 22:49:53.079116   18056 main.go:141] libmachine: (addons-110246) define libvirt domain using xml: 
	I0803 22:49:53.079133   18056 main.go:141] libmachine: (addons-110246) <domain type='kvm'>
	I0803 22:49:53.079141   18056 main.go:141] libmachine: (addons-110246)   <name>addons-110246</name>
	I0803 22:49:53.079146   18056 main.go:141] libmachine: (addons-110246)   <memory unit='MiB'>4000</memory>
	I0803 22:49:53.079152   18056 main.go:141] libmachine: (addons-110246)   <vcpu>2</vcpu>
	I0803 22:49:53.079160   18056 main.go:141] libmachine: (addons-110246)   <features>
	I0803 22:49:53.079168   18056 main.go:141] libmachine: (addons-110246)     <acpi/>
	I0803 22:49:53.079179   18056 main.go:141] libmachine: (addons-110246)     <apic/>
	I0803 22:49:53.079206   18056 main.go:141] libmachine: (addons-110246)     <pae/>
	I0803 22:49:53.079230   18056 main.go:141] libmachine: (addons-110246)     
	I0803 22:49:53.079245   18056 main.go:141] libmachine: (addons-110246)   </features>
	I0803 22:49:53.079258   18056 main.go:141] libmachine: (addons-110246)   <cpu mode='host-passthrough'>
	I0803 22:49:53.079272   18056 main.go:141] libmachine: (addons-110246)   
	I0803 22:49:53.079297   18056 main.go:141] libmachine: (addons-110246)   </cpu>
	I0803 22:49:53.079310   18056 main.go:141] libmachine: (addons-110246)   <os>
	I0803 22:49:53.079325   18056 main.go:141] libmachine: (addons-110246)     <type>hvm</type>
	I0803 22:49:53.079337   18056 main.go:141] libmachine: (addons-110246)     <boot dev='cdrom'/>
	I0803 22:49:53.079355   18056 main.go:141] libmachine: (addons-110246)     <boot dev='hd'/>
	I0803 22:49:53.079375   18056 main.go:141] libmachine: (addons-110246)     <bootmenu enable='no'/>
	I0803 22:49:53.079389   18056 main.go:141] libmachine: (addons-110246)   </os>
	I0803 22:49:53.079400   18056 main.go:141] libmachine: (addons-110246)   <devices>
	I0803 22:49:53.079413   18056 main.go:141] libmachine: (addons-110246)     <disk type='file' device='cdrom'>
	I0803 22:49:53.079424   18056 main.go:141] libmachine: (addons-110246)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/boot2docker.iso'/>
	I0803 22:49:53.079436   18056 main.go:141] libmachine: (addons-110246)       <target dev='hdc' bus='scsi'/>
	I0803 22:49:53.079454   18056 main.go:141] libmachine: (addons-110246)       <readonly/>
	I0803 22:49:53.079467   18056 main.go:141] libmachine: (addons-110246)     </disk>
	I0803 22:49:53.079482   18056 main.go:141] libmachine: (addons-110246)     <disk type='file' device='disk'>
	I0803 22:49:53.079496   18056 main.go:141] libmachine: (addons-110246)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 22:49:53.079510   18056 main.go:141] libmachine: (addons-110246)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/addons-110246.rawdisk'/>
	I0803 22:49:53.079519   18056 main.go:141] libmachine: (addons-110246)       <target dev='hda' bus='virtio'/>
	I0803 22:49:53.079537   18056 main.go:141] libmachine: (addons-110246)     </disk>
	I0803 22:49:53.079549   18056 main.go:141] libmachine: (addons-110246)     <interface type='network'>
	I0803 22:49:53.079577   18056 main.go:141] libmachine: (addons-110246)       <source network='mk-addons-110246'/>
	I0803 22:49:53.079590   18056 main.go:141] libmachine: (addons-110246)       <model type='virtio'/>
	I0803 22:49:53.079605   18056 main.go:141] libmachine: (addons-110246)     </interface>
	I0803 22:49:53.079625   18056 main.go:141] libmachine: (addons-110246)     <interface type='network'>
	I0803 22:49:53.079640   18056 main.go:141] libmachine: (addons-110246)       <source network='default'/>
	I0803 22:49:53.079652   18056 main.go:141] libmachine: (addons-110246)       <model type='virtio'/>
	I0803 22:49:53.079667   18056 main.go:141] libmachine: (addons-110246)     </interface>
	I0803 22:49:53.079679   18056 main.go:141] libmachine: (addons-110246)     <serial type='pty'>
	I0803 22:49:53.079693   18056 main.go:141] libmachine: (addons-110246)       <target port='0'/>
	I0803 22:49:53.079751   18056 main.go:141] libmachine: (addons-110246)     </serial>
	I0803 22:49:53.079773   18056 main.go:141] libmachine: (addons-110246)     <console type='pty'>
	I0803 22:49:53.079782   18056 main.go:141] libmachine: (addons-110246)       <target type='serial' port='0'/>
	I0803 22:49:53.079790   18056 main.go:141] libmachine: (addons-110246)     </console>
	I0803 22:49:53.079801   18056 main.go:141] libmachine: (addons-110246)     <rng model='virtio'>
	I0803 22:49:53.079813   18056 main.go:141] libmachine: (addons-110246)       <backend model='random'>/dev/random</backend>
	I0803 22:49:53.079823   18056 main.go:141] libmachine: (addons-110246)     </rng>
	I0803 22:49:53.079834   18056 main.go:141] libmachine: (addons-110246)     
	I0803 22:49:53.079847   18056 main.go:141] libmachine: (addons-110246)     
	I0803 22:49:53.079859   18056 main.go:141] libmachine: (addons-110246)   </devices>
	I0803 22:49:53.079869   18056 main.go:141] libmachine: (addons-110246) </domain>
	I0803 22:49:53.079882   18056 main.go:141] libmachine: (addons-110246) 
	I0803 22:49:53.087411   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:c9:f9:f7 in network default
	I0803 22:49:53.087929   18056 main.go:141] libmachine: (addons-110246) Ensuring networks are active...
	I0803 22:49:53.087946   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:53.088497   18056 main.go:141] libmachine: (addons-110246) Ensuring network default is active
	I0803 22:49:53.088746   18056 main.go:141] libmachine: (addons-110246) Ensuring network mk-addons-110246 is active
	I0803 22:49:53.089175   18056 main.go:141] libmachine: (addons-110246) Getting domain xml...
	I0803 22:49:53.089760   18056 main.go:141] libmachine: (addons-110246) Creating domain...
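	(Editor's note: once the domain XML above has been assembled, defining and booting the guest comes down to two libvirt calls. A hedged sketch against the same assumed libvirt.org/go/libvirt bindings as the earlier snippet; the function name is invented and this is not the driver's own implementation.)

    // defineAndStart persistently defines the guest from its XML and boots it.
    // (Uses the same libvirt.org/go/libvirt import as the network sketch above.)
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()
    	// Create() on a defined domain boots it; that is when the
    	// "Waiting to get IP..." phase below begins.
    	return dom.Create()
    }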
	I0803 22:49:54.490903   18056 main.go:141] libmachine: (addons-110246) Waiting to get IP...
	I0803 22:49:54.491533   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:54.491900   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:54.491937   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:54.491889   18078 retry.go:31] will retry after 267.27459ms: waiting for machine to come up
	I0803 22:49:54.760252   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:54.760642   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:54.760669   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:54.760623   18078 retry.go:31] will retry after 261.053928ms: waiting for machine to come up
	I0803 22:49:55.023001   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:55.023448   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:55.023481   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:55.023387   18078 retry.go:31] will retry after 412.486886ms: waiting for machine to come up
	I0803 22:49:55.437979   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:55.438333   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:55.438360   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:55.438292   18078 retry.go:31] will retry after 434.715844ms: waiting for machine to come up
	I0803 22:49:55.874814   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:55.875239   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:55.875265   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:55.875198   18078 retry.go:31] will retry after 695.404352ms: waiting for machine to come up
	I0803 22:49:56.571963   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:56.572400   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:56.572464   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:56.572383   18078 retry.go:31] will retry after 754.799097ms: waiting for machine to come up
	I0803 22:49:57.328265   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:57.328630   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:57.328651   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:57.328585   18078 retry.go:31] will retry after 1.183910018s: waiting for machine to come up
	I0803 22:49:58.514144   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:58.514575   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:58.514602   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:58.514526   18078 retry.go:31] will retry after 896.961741ms: waiting for machine to come up
	I0803 22:49:59.412464   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:49:59.412877   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:49:59.412907   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:49:59.412834   18078 retry.go:31] will retry after 1.510555878s: waiting for machine to come up
	I0803 22:50:00.924491   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:00.924867   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:00.924894   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:00.924817   18078 retry.go:31] will retry after 1.431660453s: waiting for machine to come up
	I0803 22:50:02.358655   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:02.359160   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:02.359228   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:02.359122   18078 retry.go:31] will retry after 2.531171158s: waiting for machine to come up
	I0803 22:50:04.893392   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:04.893870   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:04.893891   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:04.893781   18078 retry.go:31] will retry after 2.446062618s: waiting for machine to come up
	I0803 22:50:07.343233   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:07.343603   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:07.343625   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:07.343552   18078 retry.go:31] will retry after 3.161483574s: waiting for machine to come up
	I0803 22:50:10.509040   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:10.509421   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find current IP address of domain addons-110246 in network mk-addons-110246
	I0803 22:50:10.509449   18056 main.go:141] libmachine: (addons-110246) DBG | I0803 22:50:10.509381   18078 retry.go:31] will retry after 4.924124516s: waiting for machine to come up
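	(Editor's note: the repeated "will retry after ..." lines come from a generic retry helper that polls the domain for a DHCP lease with a growing, jittered delay. A stand-alone sketch of that pattern; the helper name, the initial delay, the jitter, and the 4-minute bound are assumptions for illustration, not minikube's actual tuning.)

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil keeps calling fn with a growing, jittered delay until it
    // succeeds or the overall deadline is exceeded.
    func retryUntil(timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    }

    func main() {
    	attempts := 0
    	_ = retryUntil(4*time.Minute, func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    }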
	I0803 22:50:15.437464   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.437965   18056 main.go:141] libmachine: (addons-110246) Found IP for machine: 192.168.39.9
	I0803 22:50:15.437992   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has current primary IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.437997   18056 main.go:141] libmachine: (addons-110246) Reserving static IP address...
	I0803 22:50:15.438416   18056 main.go:141] libmachine: (addons-110246) DBG | unable to find host DHCP lease matching {name: "addons-110246", mac: "52:54:00:da:10:f7", ip: "192.168.39.9"} in network mk-addons-110246
	I0803 22:50:15.510127   18056 main.go:141] libmachine: (addons-110246) DBG | Getting to WaitForSSH function...
	I0803 22:50:15.510168   18056 main.go:141] libmachine: (addons-110246) Reserved static IP address: 192.168.39.9
	I0803 22:50:15.510210   18056 main.go:141] libmachine: (addons-110246) Waiting for SSH to be available...
	I0803 22:50:15.513059   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.513478   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.513504   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.513613   18056 main.go:141] libmachine: (addons-110246) DBG | Using SSH client type: external
	I0803 22:50:15.513644   18056 main.go:141] libmachine: (addons-110246) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa (-rw-------)
	I0803 22:50:15.513686   18056 main.go:141] libmachine: (addons-110246) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 22:50:15.513701   18056 main.go:141] libmachine: (addons-110246) DBG | About to run SSH command:
	I0803 22:50:15.513712   18056 main.go:141] libmachine: (addons-110246) DBG | exit 0
	I0803 22:50:15.645499   18056 main.go:141] libmachine: (addons-110246) DBG | SSH cmd err, output: <nil>: 
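	(Editor's note: WaitForSSH above shells out to the system ssh client and treats a successful `exit 0` as proof the guest is reachable. A rough approximation of that probe using os/exec; the flag set mirrors the command printed in the log, while sshReady, the hard-coded IP, and the key path are invented for illustration.)

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    // sshReady runs `ssh ... exit 0` against the guest and returns nil on success.
    func sshReady(ip, keyPath string) error {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@"+ip,
    		"exit 0")
    	return cmd.Run()
    }

    func main() {
    	// Poll until the guest answers, roughly what the WaitForSSH step does.
    	for {
    		if err := sshReady("192.168.39.9", "/path/to/id_rsa"); err == nil {
    			log.Println("ssh is available")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    }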
	I0803 22:50:15.645758   18056 main.go:141] libmachine: (addons-110246) KVM machine creation complete!
	I0803 22:50:15.646256   18056 main.go:141] libmachine: (addons-110246) Calling .GetConfigRaw
	I0803 22:50:15.646784   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:15.646971   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:15.647126   18056 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 22:50:15.647142   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:15.648534   18056 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 22:50:15.648560   18056 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 22:50:15.648566   18056 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 22:50:15.648572   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.650728   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.651042   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.651063   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.651238   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.651405   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.651530   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.651659   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:15.651969   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:15.652208   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:15.652222   18056 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 22:50:15.756654   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 22:50:15.756675   18056 main.go:141] libmachine: Detecting the provisioner...
	I0803 22:50:15.756686   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.759594   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.759951   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.759974   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.760112   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.760294   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.760429   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.760536   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:15.760699   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:15.760861   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:15.760871   18056 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 22:50:15.870407   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 22:50:15.870495   18056 main.go:141] libmachine: found compatible host: buildroot
	I0803 22:50:15.870509   18056 main.go:141] libmachine: Provisioning with buildroot...
	I0803 22:50:15.870522   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:50:15.870754   18056 buildroot.go:166] provisioning hostname "addons-110246"
	I0803 22:50:15.870776   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:50:15.870961   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.873637   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.873973   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.874123   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.874349   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.874578   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.874720   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.874856   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:15.874986   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:15.875152   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:15.875165   18056 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-110246 && echo "addons-110246" | sudo tee /etc/hostname
	I0803 22:50:15.995884   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110246
	
	I0803 22:50:15.995911   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:15.998899   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.999354   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:15.999382   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:15.999593   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:15.999771   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:15.999933   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.000029   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.000153   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:16.000377   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:16.000401   18056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-110246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-110246/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-110246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 22:50:16.115270   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 22:50:16.115303   18056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 22:50:16.115353   18056 buildroot.go:174] setting up certificates
	I0803 22:50:16.115365   18056 provision.go:84] configureAuth start
	I0803 22:50:16.115377   18056 main.go:141] libmachine: (addons-110246) Calling .GetMachineName
	I0803 22:50:16.115713   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:16.118425   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.118768   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.118797   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.118930   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.120988   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.121215   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.121247   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.121422   18056 provision.go:143] copyHostCerts
	I0803 22:50:16.121507   18056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 22:50:16.121656   18056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 22:50:16.121755   18056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 22:50:16.121840   18056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.addons-110246 san=[127.0.0.1 192.168.39.9 addons-110246 localhost minikube]
	I0803 22:50:16.298973   18056 provision.go:177] copyRemoteCerts
	I0803 22:50:16.299035   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 22:50:16.299073   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.301748   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.302096   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.302124   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.302331   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.302517   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.302655   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.302773   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:16.387745   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 22:50:16.413946   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 22:50:16.438400   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 22:50:16.463234   18056 provision.go:87] duration metric: took 347.854825ms to configureAuth
	I0803 22:50:16.463261   18056 buildroot.go:189] setting minikube options for container-runtime
	I0803 22:50:16.463456   18056 config.go:182] Loaded profile config "addons-110246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 22:50:16.463540   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.466369   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.466656   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.466684   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.466885   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.467063   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.467211   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.467343   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.467473   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:16.467647   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:16.467666   18056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 22:50:16.744434   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 22:50:16.744459   18056 main.go:141] libmachine: Checking connection to Docker...
	I0803 22:50:16.744467   18056 main.go:141] libmachine: (addons-110246) Calling .GetURL
	I0803 22:50:16.745832   18056 main.go:141] libmachine: (addons-110246) DBG | Using libvirt version 6000000
	I0803 22:50:16.747946   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.748275   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.748298   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.748435   18056 main.go:141] libmachine: Docker is up and running!
	I0803 22:50:16.748454   18056 main.go:141] libmachine: Reticulating splines...
	I0803 22:50:16.748461   18056 client.go:171] duration metric: took 24.38296815s to LocalClient.Create
	I0803 22:50:16.748487   18056 start.go:167] duration metric: took 24.383031419s to libmachine.API.Create "addons-110246"
	I0803 22:50:16.748501   18056 start.go:293] postStartSetup for "addons-110246" (driver="kvm2")
	I0803 22:50:16.748517   18056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 22:50:16.748540   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.748778   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 22:50:16.748801   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.750881   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.751233   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.751253   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.751386   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.751577   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.751714   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.751843   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:16.835815   18056 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 22:50:16.840065   18056 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 22:50:16.840106   18056 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 22:50:16.840191   18056 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 22:50:16.840219   18056 start.go:296] duration metric: took 91.709209ms for postStartSetup
	I0803 22:50:16.840251   18056 main.go:141] libmachine: (addons-110246) Calling .GetConfigRaw
	I0803 22:50:16.840752   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:16.843193   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.843564   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.843585   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.843807   18056 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/config.json ...
	I0803 22:50:16.844001   18056 start.go:128] duration metric: took 24.4972092s to createHost
	I0803 22:50:16.844035   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.846376   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.846681   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.846702   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.846833   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.847003   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.847132   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.847233   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.847342   18056 main.go:141] libmachine: Using SSH client type: native
	I0803 22:50:16.847487   18056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I0803 22:50:16.847496   18056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 22:50:16.954226   18056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722725416.929233016
	
	I0803 22:50:16.954248   18056 fix.go:216] guest clock: 1722725416.929233016
	I0803 22:50:16.954257   18056 fix.go:229] Guest: 2024-08-03 22:50:16.929233016 +0000 UTC Remote: 2024-08-03 22:50:16.844021543 +0000 UTC m=+24.597637724 (delta=85.211473ms)
	I0803 22:50:16.954304   18056 fix.go:200] guest clock delta is within tolerance: 85.211473ms
	I0803 22:50:16.954315   18056 start.go:83] releasing machines lock for "addons-110246", held for 24.607588326s
	I0803 22:50:16.954345   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.954614   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:16.957009   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.957390   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.957419   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.957548   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.958099   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.958264   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:16.958375   18056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 22:50:16.958417   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.958455   18056 ssh_runner.go:195] Run: cat /version.json
	I0803 22:50:16.958476   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:16.960999   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961093   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961410   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.961438   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:16.961460   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961538   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:16.961630   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.961918   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:16.961927   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.962119   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:16.962134   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.962274   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:16.962287   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:16.962440   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:17.059869   18056 ssh_runner.go:195] Run: systemctl --version
	I0803 22:50:17.065951   18056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 22:50:17.227303   18056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 22:50:17.233292   18056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 22:50:17.233366   18056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 22:50:17.249908   18056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 22:50:17.249928   18056 start.go:495] detecting cgroup driver to use...
	I0803 22:50:17.249999   18056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 22:50:17.269060   18056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 22:50:17.284013   18056 docker.go:217] disabling cri-docker service (if available) ...
	I0803 22:50:17.284062   18056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 22:50:17.298506   18056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 22:50:17.312700   18056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 22:50:17.432566   18056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 22:50:17.570849   18056 docker.go:233] disabling docker service ...
	I0803 22:50:17.570917   18056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 22:50:17.594541   18056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 22:50:17.607432   18056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 22:50:17.747767   18056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 22:50:17.879504   18056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 22:50:17.893501   18056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 22:50:17.912529   18056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 22:50:17.912593   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.924139   18056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 22:50:17.924214   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.935611   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.947040   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.958472   18056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 22:50:17.970049   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.980667   18056 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 22:50:17.998432   18056 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
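The sed edits above target a handful of keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and an unprivileged-port sysctl. A quick manual check of the result could look roughly like the following shell sketch (illustrative only; the expected values are taken from the commands above, not from a captured file):

    # Inspect the keys the preceding sed commands are meant to set (sketch).
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",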
	I0803 22:50:18.009183   18056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 22:50:18.019004   18056 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 22:50:18.019069   18056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 22:50:18.032231   18056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 22:50:18.042602   18056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:50:18.170949   18056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 22:50:18.308146   18056 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 22:50:18.308239   18056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 22:50:18.313596   18056 start.go:563] Will wait 60s for crictl version
	I0803 22:50:18.313661   18056 ssh_runner.go:195] Run: which crictl
	I0803 22:50:18.317429   18056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 22:50:18.359076   18056 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 22:50:18.359185   18056 ssh_runner.go:195] Run: crio --version
	I0803 22:50:18.387177   18056 ssh_runner.go:195] Run: crio --version
	I0803 22:50:18.416931   18056 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 22:50:18.418436   18056 main.go:141] libmachine: (addons-110246) Calling .GetIP
	I0803 22:50:18.420808   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:18.421185   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:18.421213   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:18.421473   18056 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 22:50:18.425657   18056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 22:50:18.438848   18056 kubeadm.go:883] updating cluster {Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 22:50:18.438985   18056 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 22:50:18.439046   18056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 22:50:18.475471   18056 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0803 22:50:18.475532   18056 ssh_runner.go:195] Run: which lz4
	I0803 22:50:18.479596   18056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 22:50:18.483843   18056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 22:50:18.483880   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0803 22:50:19.837485   18056 crio.go:462] duration metric: took 1.357914095s to copy over tarball
	I0803 22:50:19.837565   18056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 22:50:22.120178   18056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.282588751s)
	I0803 22:50:22.120205   18056 crio.go:469] duration metric: took 2.28268959s to extract the tarball
	I0803 22:50:22.120215   18056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 22:50:22.159893   18056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 22:50:22.201653   18056 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 22:50:22.201673   18056 cache_images.go:84] Images are preloaded, skipping loading
	I0803 22:50:22.201680   18056 kubeadm.go:934] updating node { 192.168.39.9 8443 v1.30.3 crio true true} ...
	I0803 22:50:22.201773   18056 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-110246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 22:50:22.201836   18056 ssh_runner.go:195] Run: crio config
	I0803 22:50:22.246998   18056 cni.go:84] Creating CNI manager for ""
	I0803 22:50:22.247016   18056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:50:22.247025   18056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 22:50:22.247046   18056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-110246 NodeName:addons-110246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 22:50:22.247175   18056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-110246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 22:50:22.247233   18056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 22:50:22.257291   18056 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 22:50:22.257392   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 22:50:22.267013   18056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0803 22:50:22.283284   18056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 22:50:22.299732   18056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
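The kubeadm config rendered above is what was just written to /var/tmp/minikube/kubeadm.yaml.new (it is promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs further down). If one wanted to sanity-check such a config by hand, a kubeadm dry run is one option (a sketch; the test itself does not run this):

    # Validate the generated config without modifying the node (sketch).
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run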
	I0803 22:50:22.316271   18056 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I0803 22:50:22.320106   18056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 22:50:22.332424   18056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:50:22.452897   18056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 22:50:22.468995   18056 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246 for IP: 192.168.39.9
	I0803 22:50:22.469015   18056 certs.go:194] generating shared ca certs ...
	I0803 22:50:22.469037   18056 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.469175   18056 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 22:50:22.610747   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt ...
	I0803 22:50:22.610771   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt: {Name:mk3a8f2bd1a415d1c4e7cc2b5924aceda4b639bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.610940   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key ...
	I0803 22:50:22.610950   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key: {Name:mk942ac2ea6bb3e011a5fa7ccb5abff5050c5a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.611019   18056 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 22:50:22.692067   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt ...
	I0803 22:50:22.692093   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt: {Name:mk114239716c33003f0616228c77292e17d394d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.692241   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key ...
	I0803 22:50:22.692251   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key: {Name:mkd0f732a980ba94cb7bfc1d30ec645ce1f371fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.692313   18056 certs.go:256] generating profile certs ...
	I0803 22:50:22.692360   18056 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.key
	I0803 22:50:22.692373   18056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt with IP's: []
	I0803 22:50:22.837428   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt ...
	I0803 22:50:22.837454   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: {Name:mk0b1b89c09a545a9f4c16647029f90822cacb9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.837597   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.key ...
	I0803 22:50:22.837607   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.key: {Name:mk607996420dfafa0c43156c772bab34637203ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.837673   18056 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104
	I0803 22:50:22.837689   18056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.9]
	I0803 22:50:22.980827   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104 ...
	I0803 22:50:22.980861   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104: {Name:mk659af5911ae73a1adfafa14713ccf0169f6bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.981064   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104 ...
	I0803 22:50:22.981084   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104: {Name:mkb18322c74bac3d280e0bf809afe98698fd7659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:22.981183   18056 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt.543fb104 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt
	I0803 22:50:22.981294   18056 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key.543fb104 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key
	I0803 22:50:22.981388   18056 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key
	I0803 22:50:22.981410   18056 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt with IP's: []
	I0803 22:50:23.095360   18056 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt ...
	I0803 22:50:23.095389   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt: {Name:mk2e07e19d0d6c4415d3afa9e4978acd9676a5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:23.095565   18056 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key ...
	I0803 22:50:23.095579   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key: {Name:mk6477cb65d8325d615c9080c80123d84b8d2dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:23.095765   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 22:50:23.095815   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 22:50:23.095848   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 22:50:23.095882   18056 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 22:50:23.096465   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 22:50:23.121735   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 22:50:23.147400   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 22:50:23.206458   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 22:50:23.230738   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0803 22:50:23.260375   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 22:50:23.288146   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 22:50:23.315076   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0803 22:50:23.339413   18056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 22:50:23.364261   18056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 22:50:23.381470   18056 ssh_runner.go:195] Run: openssl version
	I0803 22:50:23.387256   18056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 22:50:23.398109   18056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:50:23.402996   18056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:50:23.403061   18056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:50:23.408966   18056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
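The b5213941.0 name in the command above is the OpenSSL subject-hash form used for CA lookup in /etc/ssl/certs; the value comes from the openssl x509 -hash call directly above. Reproducing the relationship by hand would look roughly like this (sketch, not output captured from the run):

    # The subject hash printed here is what the /etc/ssl/certs/<hash>.0 symlink is named after (sketch).
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem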
	I0803 22:50:23.420022   18056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 22:50:23.424337   18056 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 22:50:23.424397   18056 kubeadm.go:392] StartCluster: {Name:addons-110246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-110246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:50:23.424497   18056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 22:50:23.424552   18056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 22:50:23.462843   18056 cri.go:89] found id: ""
	I0803 22:50:23.462914   18056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 22:50:23.473433   18056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 22:50:23.483501   18056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 22:50:23.493273   18056 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 22:50:23.493293   18056 kubeadm.go:157] found existing configuration files:
	
	I0803 22:50:23.493331   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 22:50:23.502606   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 22:50:23.502655   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 22:50:23.512139   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 22:50:23.521443   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 22:50:23.521497   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 22:50:23.530993   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 22:50:23.540069   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 22:50:23.540123   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 22:50:23.549428   18056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 22:50:23.558407   18056 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 22:50:23.558475   18056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 22:50:23.567748   18056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 22:50:23.751245   18056 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 22:50:33.775833   18056 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 22:50:33.775916   18056 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 22:50:33.775998   18056 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 22:50:33.776106   18056 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 22:50:33.776224   18056 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 22:50:33.776317   18056 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 22:50:33.777953   18056 out.go:204]   - Generating certificates and keys ...
	I0803 22:50:33.778052   18056 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 22:50:33.778138   18056 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 22:50:33.778240   18056 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 22:50:33.778338   18056 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 22:50:33.778422   18056 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 22:50:33.778488   18056 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 22:50:33.778567   18056 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 22:50:33.778734   18056 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-110246 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I0803 22:50:33.778820   18056 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 22:50:33.778988   18056 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-110246 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I0803 22:50:33.779088   18056 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 22:50:33.779187   18056 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 22:50:33.779258   18056 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 22:50:33.779343   18056 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 22:50:33.779416   18056 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 22:50:33.779505   18056 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 22:50:33.779586   18056 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 22:50:33.779673   18056 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 22:50:33.779750   18056 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 22:50:33.779884   18056 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 22:50:33.780011   18056 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 22:50:33.781399   18056 out.go:204]   - Booting up control plane ...
	I0803 22:50:33.781510   18056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 22:50:33.781581   18056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 22:50:33.781640   18056 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 22:50:33.781738   18056 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 22:50:33.781828   18056 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 22:50:33.781871   18056 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 22:50:33.781997   18056 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 22:50:33.782102   18056 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 22:50:33.782183   18056 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.038703ms
	I0803 22:50:33.782275   18056 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 22:50:33.782363   18056 kubeadm.go:310] [api-check] The API server is healthy after 5.001822213s
	I0803 22:50:33.782481   18056 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 22:50:33.782646   18056 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 22:50:33.782729   18056 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 22:50:33.782954   18056 kubeadm.go:310] [mark-control-plane] Marking the node addons-110246 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 22:50:33.783031   18056 kubeadm.go:310] [bootstrap-token] Using token: 5bn30m.9lnl4t0eu1hcsdun
	I0803 22:50:33.784539   18056 out.go:204]   - Configuring RBAC rules ...
	I0803 22:50:33.784643   18056 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 22:50:33.784772   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 22:50:33.784920   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 22:50:33.785053   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 22:50:33.785228   18056 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 22:50:33.785344   18056 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 22:50:33.785512   18056 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 22:50:33.785569   18056 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 22:50:33.785647   18056 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 22:50:33.785655   18056 kubeadm.go:310] 
	I0803 22:50:33.785729   18056 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 22:50:33.785740   18056 kubeadm.go:310] 
	I0803 22:50:33.785844   18056 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 22:50:33.785854   18056 kubeadm.go:310] 
	I0803 22:50:33.785904   18056 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 22:50:33.785985   18056 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 22:50:33.786056   18056 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 22:50:33.786066   18056 kubeadm.go:310] 
	I0803 22:50:33.786138   18056 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 22:50:33.786150   18056 kubeadm.go:310] 
	I0803 22:50:33.786189   18056 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 22:50:33.786199   18056 kubeadm.go:310] 
	I0803 22:50:33.786246   18056 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 22:50:33.786333   18056 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 22:50:33.786428   18056 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 22:50:33.786436   18056 kubeadm.go:310] 
	I0803 22:50:33.786545   18056 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 22:50:33.786621   18056 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 22:50:33.786628   18056 kubeadm.go:310] 
	I0803 22:50:33.786722   18056 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5bn30m.9lnl4t0eu1hcsdun \
	I0803 22:50:33.786873   18056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0803 22:50:33.786917   18056 kubeadm.go:310] 	--control-plane 
	I0803 22:50:33.786925   18056 kubeadm.go:310] 
	I0803 22:50:33.787020   18056 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 22:50:33.787033   18056 kubeadm.go:310] 
	I0803 22:50:33.787132   18056 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5bn30m.9lnl4t0eu1hcsdun \
	I0803 22:50:33.787251   18056 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0803 22:50:33.787263   18056 cni.go:84] Creating CNI manager for ""
	I0803 22:50:33.787273   18056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:50:33.788702   18056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 22:50:33.789994   18056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 22:50:33.801248   18056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0803 22:50:33.820227   18056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 22:50:33.820319   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:33.820347   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-110246 minikube.k8s.io/updated_at=2024_08_03T22_50_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=addons-110246 minikube.k8s.io/primary=true
	I0803 22:50:33.942895   18056 ops.go:34] apiserver oom_adj: -16
	I0803 22:50:33.942962   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:34.443378   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:34.943961   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:35.443969   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:35.943075   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:36.443830   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:36.943319   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:37.443623   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:37.943307   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:38.443128   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:38.943962   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:39.443289   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:39.943316   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:40.443110   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:40.943415   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:41.443720   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:41.943021   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:42.443968   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:42.943900   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:43.443700   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:43.944013   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:44.443681   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:44.943834   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:45.443205   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:45.943593   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:46.443163   18056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:46.536987   18056 kubeadm.go:1113] duration metric: took 12.71673293s to wait for elevateKubeSystemPrivileges
	I0803 22:50:46.537024   18056 kubeadm.go:394] duration metric: took 23.112631323s to StartCluster
	I0803 22:50:46.537045   18056 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:46.537178   18056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 22:50:46.537652   18056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:46.537867   18056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 22:50:46.537886   18056 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 22:50:46.537953   18056 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0803 22:50:46.538044   18056 addons.go:69] Setting yakd=true in profile "addons-110246"
	I0803 22:50:46.538066   18056 addons.go:69] Setting inspektor-gadget=true in profile "addons-110246"
	I0803 22:50:46.538082   18056 addons.go:69] Setting metrics-server=true in profile "addons-110246"
	I0803 22:50:46.538100   18056 addons.go:234] Setting addon metrics-server=true in "addons-110246"
	I0803 22:50:46.538093   18056 addons.go:69] Setting gcp-auth=true in profile "addons-110246"
	I0803 22:50:46.538103   18056 config.go:182] Loaded profile config "addons-110246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 22:50:46.538114   18056 addons.go:69] Setting helm-tiller=true in profile "addons-110246"
	I0803 22:50:46.538124   18056 mustload.go:65] Loading cluster: addons-110246
	I0803 22:50:46.538130   18056 addons.go:234] Setting addon helm-tiller=true in "addons-110246"
	I0803 22:50:46.538135   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538106   18056 addons.go:234] Setting addon inspektor-gadget=true in "addons-110246"
	I0803 22:50:46.538161   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538165   18056 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-110246"
	I0803 22:50:46.538182   18056 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-110246"
	I0803 22:50:46.538204   18056 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-110246"
	I0803 22:50:46.538225   18056 addons.go:69] Setting registry=true in profile "addons-110246"
	I0803 22:50:46.538236   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538237   18056 addons.go:69] Setting cloud-spanner=true in profile "addons-110246"
	I0803 22:50:46.538258   18056 addons.go:234] Setting addon cloud-spanner=true in "addons-110246"
	I0803 22:50:46.538260   18056 addons.go:234] Setting addon registry=true in "addons-110246"
	I0803 22:50:46.538285   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538285   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538375   18056 addons.go:69] Setting volcano=true in profile "addons-110246"
	I0803 22:50:46.538402   18056 addons.go:234] Setting addon volcano=true in "addons-110246"
	I0803 22:50:46.538432   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538578   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538590   18056 addons.go:69] Setting ingress=true in profile "addons-110246"
	I0803 22:50:46.538593   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538608   18056 addons.go:69] Setting volumesnapshots=true in profile "addons-110246"
	I0803 22:50:46.538614   18056 addons.go:234] Setting addon ingress=true in "addons-110246"
	I0803 22:50:46.538615   18056 addons.go:69] Setting default-storageclass=true in profile "addons-110246"
	I0803 22:50:46.538625   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538632   18056 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-110246"
	I0803 22:50:46.538641   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538641   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538649   18056 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-110246"
	I0803 22:50:46.538662   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538230   18056 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-110246"
	I0803 22:50:46.538640   18056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-110246"
	I0803 22:50:46.538675   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538685   18056 addons.go:69] Setting ingress-dns=true in profile "addons-110246"
	I0803 22:50:46.538703   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538703   18056 addons.go:234] Setting addon ingress-dns=true in "addons-110246"
	I0803 22:50:46.538168   18056 addons.go:69] Setting storage-provisioner=true in profile "addons-110246"
	I0803 22:50:46.538077   18056 addons.go:234] Setting addon yakd=true in "addons-110246"
	I0803 22:50:46.538580   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538626   18056 addons.go:234] Setting addon volumesnapshots=true in "addons-110246"
	I0803 22:50:46.538759   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538765   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538742   18056 addons.go:234] Setting addon storage-provisioner=true in "addons-110246"
	I0803 22:50:46.538795   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538811   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538819   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538616   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538961   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.538974   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.538978   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.538986   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539022   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539038   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.539080   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539096   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539145   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539171   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539267   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.539365   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539388   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539389   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539416   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539423   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.539433   18056 config.go:182] Loaded profile config "addons-110246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 22:50:46.539293   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.539485   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.539825   18056 out.go:177] * Verifying Kubernetes components...
	I0803 22:50:46.542803   18056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:50:46.559466   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0803 22:50:46.559862   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0803 22:50:46.559970   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40101
	I0803 22:50:46.560106   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.560113   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38417
	I0803 22:50:46.560325   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.560435   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.560543   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
	I0803 22:50:46.560783   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.560793   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.560901   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.560911   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.560962   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.561344   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.561451   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.561472   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.561502   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.561521   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.561576   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.561888   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.561921   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.562382   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.562433   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.569798   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.569846   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.570139   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.570182   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.571711   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.571738   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.571899   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43383
	I0803 22:50:46.571910   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.571931   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.572052   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.572539   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.572568   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.577758   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.577779   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.577872   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.577943   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.577965   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.579323   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.579329   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.579375   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.579853   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.580010   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.580051   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.580447   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.580486   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.605489   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0803 22:50:46.606239   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.606832   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.606866   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.607212   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.607825   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.607874   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.608086   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0803 22:50:46.608103   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34991
	I0803 22:50:46.608667   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.609091   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.609108   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.609493   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.609733   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.610815   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.611518   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.611541   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.611971   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.612592   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.614182   18056 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-110246"
	I0803 22:50:46.614229   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.614430   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.614594   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.614631   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.615785   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0803 22:50:46.616202   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.616654   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.616675   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.616914   18056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 22:50:46.616981   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.617143   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.617501   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34929
	I0803 22:50:46.617894   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.618700   18056 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 22:50:46.618720   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 22:50:46.618737   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.618773   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.619820   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.619842   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.620400   18056 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0803 22:50:46.620919   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.621447   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0803 22:50:46.621476   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.621862   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.622147   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.622303   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.622317   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.622627   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.622645   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.622917   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.623086   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.623119   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.623238   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.623515   18056 out.go:177]   - Using image docker.io/registry:2.8.3
	I0803 22:50:46.623656   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.623676   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.623964   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.624682   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0803 22:50:46.625118   18056 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0803 22:50:46.625134   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0803 22:50:46.625150   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.626301   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0803 22:50:46.626411   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I0803 22:50:46.627065   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.627154   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.627315   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33153
	I0803 22:50:46.627567   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.627579   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.627777   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.627793   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.627810   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.628761   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.628778   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.628801   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.628812   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36571
	I0803 22:50:46.628828   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.629892   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.629915   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.629916   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.629992   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.630291   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.630313   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.630345   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.630774   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.630791   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.630991   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.631024   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.631277   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.631652   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34199
	I0803 22:50:46.631823   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0803 22:50:46.632032   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.632067   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.632301   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.632328   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.632349   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.632477   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.632710   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.632728   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.632783   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.632922   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.633084   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.633620   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.633793   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.634695   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.634848   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.634859   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.635153   18056 addons.go:234] Setting addon default-storageclass=true in "addons-110246"
	I0803 22:50:46.635185   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.635296   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.635418   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.635482   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0803 22:50:46.635517   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.635546   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.635720   18056 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0803 22:50:46.636200   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.636232   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.636830   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.637050   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0803 22:50:46.637067   18056 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0803 22:50:46.637092   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.637232   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.637244   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.637922   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.637988   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.638995   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.639020   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.639523   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0803 22:50:46.640688   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0803 22:50:46.640707   18056 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0803 22:50:46.640727   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.640794   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.640823   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.640917   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.640942   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.640965   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.642181   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.642211   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.642253   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.642631   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.642796   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.642990   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.643586   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:46.644156   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.644185   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.644498   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.644952   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.644980   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.645189   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.645336   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.645514   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.645644   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.648045   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0803 22:50:46.648463   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.648954   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.648971   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.649333   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.649583   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.651699   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.653546   18056 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0803 22:50:46.654993   18056 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0803 22:50:46.655012   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0803 22:50:46.655029   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.658747   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39327
	I0803 22:50:46.659079   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.659172   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.659685   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.659702   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.660068   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.660118   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.660138   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.660329   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.660848   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.661087   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.661307   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.661385   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0803 22:50:46.661689   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.661876   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.662462   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.662482   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.662797   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.662969   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.663096   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.664544   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.665069   18056 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0803 22:50:46.665075   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0803 22:50:46.665600   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.666105   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.666127   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.666168   18056 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0803 22:50:46.666225   18056 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0803 22:50:46.666248   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0803 22:50:46.666267   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.666484   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.666714   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.667226   18056 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0803 22:50:46.667246   18056 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0803 22:50:46.667274   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.669630   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0803 22:50:46.670156   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I0803 22:50:46.670655   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.670674   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.670689   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.671052   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.671072   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.671237   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.671375   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.671511   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.671522   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.671720   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.671785   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.671836   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.671883   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.671898   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.671968   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.672499   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.672746   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.673123   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.673279   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.673291   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.673340   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.673439   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.673479   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.673478   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.673933   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.674106   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.674314   18056 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0803 22:50:46.675569   18056 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 22:50:46.675586   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0803 22:50:46.675601   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.676557   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.678749   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0803 22:50:46.679226   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.679832   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.679851   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.679888   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.680072   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.680218   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.680351   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.681837   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0803 22:50:46.683077   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0803 22:50:46.684840   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0803 22:50:46.686225   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0803 22:50:46.686913   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0803 22:50:46.687461   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.688434   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0803 22:50:46.688653   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0803 22:50:46.689061   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0803 22:50:46.689659   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.689927   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.690022   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.689945   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.690223   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.690248   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.690370   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0803 22:50:46.690733   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.690755   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.690810   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.690870   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.690966   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.691010   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.691451   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.691467   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.691798   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0803 22:50:46.693110   18056 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0803 22:50:46.693189   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.693561   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0803 22:50:46.693566   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0803 22:50:46.693587   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.693648   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.694000   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.694065   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.694468   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.694790   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:46.694871   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0803 22:50:46.694889   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0803 22:50:46.694924   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:46.695255   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.695274   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.695391   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.695406   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.695801   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.695839   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.695861   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.695840   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.695884   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.696018   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.696065   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.696187   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:46.696199   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:46.696611   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:46.696642   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:46.696650   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:46.696659   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:46.696666   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:46.696928   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:46.696939   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:46.696950   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	W0803 22:50:46.697144   18056 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0803 22:50:46.697965   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.698032   18056 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0803 22:50:46.699011   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.699241   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.699709   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.699730   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.699877   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:46.699951   18056 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 22:50:46.700277   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0803 22:50:46.700296   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.700004   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.700506   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.700696   18056 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0803 22:50:46.700772   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.702257   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0803 22:50:46.702274   18056 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0803 22:50:46.702294   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.702369   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.702909   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:46.704252   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.704277   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.704293   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.704319   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0803 22:50:46.704437   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0803 22:50:46.704568   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.704753   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.704836   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.704897   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.705019   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.706031   18056 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 22:50:46.706114   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0803 22:50:46.706133   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.706321   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.706409   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.706428   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.706694   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.706714   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.707067   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.707229   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.707375   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.707506   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.707942   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.708422   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.708948   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.709451   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.709468   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	W0803 22:50:46.710030   18056 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54746->192.168.39.9:22: read: connection reset by peer
	I0803 22:50:46.710055   18056 retry.go:31] will retry after 309.264282ms: ssh: handshake failed: read tcp 192.168.39.1:54746->192.168.39.9:22: read: connection reset by peer
	I0803 22:50:46.710094   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.710199   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.710377   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.710396   18056 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 22:50:46.710407   18056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 22:50:46.710420   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.710563   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.710698   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.713100   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.713502   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.713521   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.713678   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.713817   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.713922   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.714023   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:46.718794   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0803 22:50:46.719162   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:46.719576   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:46.719597   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:46.719945   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:46.720137   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:46.721547   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:46.723473   18056 out.go:177]   - Using image docker.io/busybox:stable
	I0803 22:50:46.724748   18056 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0803 22:50:46.725945   18056 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 22:50:46.725957   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0803 22:50:46.725972   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:46.729224   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.729637   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:46.729666   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:46.729795   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:46.730001   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:46.730217   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:46.730373   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:47.031046   18056 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0803 22:50:47.031069   18056 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0803 22:50:47.050327   18056 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0803 22:50:47.050344   18056 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0803 22:50:47.115233   18056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 22:50:47.115271   18056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 22:50:47.123103   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0803 22:50:47.123123   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0803 22:50:47.132170   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0803 22:50:47.138242   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 22:50:47.190948   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 22:50:47.191778   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 22:50:47.260702   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 22:50:47.304212   18056 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0803 22:50:47.304247   18056 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0803 22:50:47.322792   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0803 22:50:47.322828   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0803 22:50:47.322851   18056 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0803 22:50:47.322872   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0803 22:50:47.341924   18056 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0803 22:50:47.341950   18056 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0803 22:50:47.356450   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 22:50:47.359303   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0803 22:50:47.359327   18056 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0803 22:50:47.401479   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 22:50:47.407700   18056 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0803 22:50:47.407730   18056 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0803 22:50:47.559239   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0803 22:50:47.559271   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0803 22:50:47.585644   18056 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0803 22:50:47.585675   18056 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0803 22:50:47.590738   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0803 22:50:47.593508   18056 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0803 22:50:47.593527   18056 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0803 22:50:47.623248   18056 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 22:50:47.623275   18056 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0803 22:50:47.663407   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0803 22:50:47.756646   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0803 22:50:47.756674   18056 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0803 22:50:47.801339   18056 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0803 22:50:47.801385   18056 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0803 22:50:47.816655   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0803 22:50:47.816680   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0803 22:50:47.839394   18056 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0803 22:50:47.839419   18056 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0803 22:50:47.903125   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 22:50:47.954230   18056 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0803 22:50:47.954263   18056 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0803 22:50:48.012148   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0803 22:50:48.012179   18056 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0803 22:50:48.067624   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0803 22:50:48.067656   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0803 22:50:48.106363   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0803 22:50:48.106399   18056 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0803 22:50:48.226957   18056 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0803 22:50:48.226988   18056 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0803 22:50:48.359269   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0803 22:50:48.359296   18056 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0803 22:50:48.429013   18056 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0803 22:50:48.429045   18056 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0803 22:50:48.437518   18056 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:48.437545   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0803 22:50:48.738562   18056 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0803 22:50:48.738593   18056 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0803 22:50:48.803239   18056 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0803 22:50:48.803262   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0803 22:50:48.861268   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:48.891643   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0803 22:50:48.891670   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0803 22:50:49.103985   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0803 22:50:49.139682   18056 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 22:50:49.139702   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0803 22:50:49.226850   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0803 22:50:49.226884   18056 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0803 22:50:49.347390   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 22:50:49.521310   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0803 22:50:49.521343   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0803 22:50:49.577054   18056 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.461783796s)
	I0803 22:50:49.577095   18056 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.461796975s)
	I0803 22:50:49.577097   18056 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
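Note: the sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 in this run). A minimal sketch of the resulting Corefile stanza, taken directly from the sed expression, and a way to inspect it afterwards:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml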
	I0803 22:50:49.578077   18056 node_ready.go:35] waiting up to 6m0s for node "addons-110246" to be "Ready" ...
	I0803 22:50:49.581337   18056 node_ready.go:49] node "addons-110246" has status "Ready":"True"
	I0803 22:50:49.581378   18056 node_ready.go:38] duration metric: took 3.257535ms for node "addons-110246" to be "Ready" ...
	I0803 22:50:49.581390   18056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 22:50:49.596429   18056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:49.805408   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0803 22:50:49.805432   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0803 22:50:50.102281   18056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-110246" context rescaled to 1 replicas
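Note: the rescale above is done by the harness through the Kubernetes API (kapi.go); a hedged manual equivalent, assuming the same deployment name and namespace, would be:

    kubectl -n kube-system scale deployment coredns --replicas=1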
	I0803 22:50:50.195404   18056 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 22:50:50.195434   18056 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0803 22:50:50.270052   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.137843129s)
	I0803 22:50:50.270106   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:50.270119   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:50.270453   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:50.270466   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:50.270474   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:50.270488   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:50.270496   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:50.270724   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:50.270759   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:50.432455   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 22:50:51.658760   18056 pod_ready.go:102] pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:51.886465   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.748182787s)
	I0803 22:50:51.886524   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:51.886540   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:51.887388   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:51.887406   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:51.887428   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:51.887437   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:51.887673   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:51.887691   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.648319   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.45733094s)
	I0803 22:50:52.648368   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648381   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648382   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.456551146s)
	I0803 22:50:52.648425   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648441   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648622   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.648640   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.648650   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648663   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648683   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.648726   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.648748   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.648755   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.648764   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.648771   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.648895   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.648908   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.649132   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.649157   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.649177   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.667778   18056 pod_ready.go:92] pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.667801   18056 pod_ready.go:81] duration metric: took 3.071339238s for pod "coredns-7db6d8ff4d-8hx7t" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.667814   18056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hbp7b" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.699843   18056 pod_ready.go:92] pod "coredns-7db6d8ff4d-hbp7b" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.699864   18056 pod_ready.go:81] duration metric: took 32.042317ms for pod "coredns-7db6d8ff4d-hbp7b" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.699876   18056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.701103   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:52.701123   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:52.701397   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:52.701446   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:52.701458   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:52.721977   18056 pod_ready.go:92] pod "etcd-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.722000   18056 pod_ready.go:81] duration metric: took 22.116795ms for pod "etcd-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.722013   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.728394   18056 pod_ready.go:92] pod "kube-apiserver-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.728415   18056 pod_ready.go:81] duration metric: took 6.393731ms for pod "kube-apiserver-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.728426   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.735430   18056 pod_ready.go:92] pod "kube-controller-manager-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:52.735452   18056 pod_ready.go:81] duration metric: took 7.018737ms for pod "kube-controller-manager-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:52.735463   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lfl9m" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.013986   18056 pod_ready.go:92] pod "kube-proxy-lfl9m" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:53.014007   18056 pod_ready.go:81] duration metric: took 278.536554ms for pod "kube-proxy-lfl9m" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.014016   18056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.401659   18056 pod_ready.go:92] pod "kube-scheduler-addons-110246" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:53.401692   18056 pod_ready.go:81] duration metric: took 387.668627ms for pod "kube-scheduler-addons-110246" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.401706   18056 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:53.703925   18056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0803 22:50:53.703968   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:53.707094   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:53.707557   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:53.707585   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:53.707772   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:53.707993   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:53.708176   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:53.708364   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:54.110346   18056 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0803 22:50:54.250832   18056 addons.go:234] Setting addon gcp-auth=true in "addons-110246"
	I0803 22:50:54.250901   18056 host.go:66] Checking if "addons-110246" exists ...
	I0803 22:50:54.251355   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:54.251401   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:54.266999   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0803 22:50:54.267436   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:54.267969   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:54.267987   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:54.268319   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:54.268909   18056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 22:50:54.268940   18056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 22:50:54.284039   18056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0803 22:50:54.284489   18056 main.go:141] libmachine: () Calling .GetVersion
	I0803 22:50:54.284961   18056 main.go:141] libmachine: Using API Version  1
	I0803 22:50:54.284981   18056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 22:50:54.285339   18056 main.go:141] libmachine: () Calling .GetMachineName
	I0803 22:50:54.285529   18056 main.go:141] libmachine: (addons-110246) Calling .GetState
	I0803 22:50:54.287242   18056 main.go:141] libmachine: (addons-110246) Calling .DriverName
	I0803 22:50:54.287464   18056 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0803 22:50:54.287485   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHHostname
	I0803 22:50:54.290517   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:54.290985   18056 main.go:141] libmachine: (addons-110246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:10:f7", ip: ""} in network mk-addons-110246: {Iface:virbr1 ExpiryTime:2024-08-03 23:50:07 +0000 UTC Type:0 Mac:52:54:00:da:10:f7 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-110246 Clientid:01:52:54:00:da:10:f7}
	I0803 22:50:54.291012   18056 main.go:141] libmachine: (addons-110246) DBG | domain addons-110246 has defined IP address 192.168.39.9 and MAC address 52:54:00:da:10:f7 in network mk-addons-110246
	I0803 22:50:54.291199   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHPort
	I0803 22:50:54.291364   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHKeyPath
	I0803 22:50:54.291525   18056 main.go:141] libmachine: (addons-110246) Calling .GetSSHUsername
	I0803 22:50:54.291653   18056 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/addons-110246/id_rsa Username:docker}
	I0803 22:50:55.430178   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:55.930942   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.574454564s)
	I0803 22:50:55.930991   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.930997   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.529485031s)
	I0803 22:50:55.931007   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.670270203s)
	I0803 22:50:55.931003   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931047   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931066   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931073   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.340308726s)
	I0803 22:50:55.931091   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931100   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931036   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931159   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931163   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.267719265s)
	I0803 22:50:55.931220   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931230   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931253   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.028094381s)
	I0803 22:50:55.931271   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931280   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931411   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.070107313s)
	I0803 22:50:55.931436   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	W0803 22:50:55.931438   18056 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0803 22:50:55.931457   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931458   18056 retry.go:31] will retry after 178.800834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
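Note: the apply fails because csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass while the CRD applied in the same batch has not yet been registered by the API server; minikube's addon code simply retries, and a later run re-applies the same files with --force (see the Run at 22:50:56.110743 below). A sketch of one way to avoid the race outside the harness, not what minikube itself does, using the file paths shown above:

    kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml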
	I0803 22:50:55.931486   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931494   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931502   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931509   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931511   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.827498503s)
	I0803 22:50:55.931529   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931538   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931604   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931626   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931647   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931658   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931672   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931672   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931680   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931683   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931687   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931691   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931736   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931656   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.584223067s)
	I0803 22:50:55.931758   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931765   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931765   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931772   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.931776   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931779   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.931820   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.931838   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.931845   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.931932   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.932020   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.932043   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.932053   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.932053   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.932062   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.932065   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933220   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933248   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933256   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933265   18056 addons.go:475] Verifying addon metrics-server=true in "addons-110246"
	I0803 22:50:55.933310   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933333   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933340   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933348   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.933371   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.933665   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933699   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933709   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933718   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933726   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.933743   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.933710   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933784   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933790   18056 addons.go:475] Verifying addon registry=true in "addons-110246"
	I0803 22:50:55.933808   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933818   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933923   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.933946   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933953   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933974   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.933983   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.933993   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.934005   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.934257   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.934292   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.934304   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.934882   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:55.934920   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.934927   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:55.934935   18056 addons.go:475] Verifying addon ingress=true in "addons-110246"
	I0803 22:50:55.935597   18056 out.go:177] * Verifying registry addon...
	I0803 22:50:55.936797   18056 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-110246 service yakd-dashboard -n yakd-dashboard
	
	I0803 22:50:55.936822   18056 out.go:177] * Verifying ingress addon...
	I0803 22:50:55.938286   18056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0803 22:50:55.939438   18056 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0803 22:50:55.953898   18056 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0803 22:50:55.953921   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:55.956875   18056 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0803 22:50:55.956897   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:55.961832   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:55.961865   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:55.962124   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:55.962139   18056 main.go:141] libmachine: Making call to close connection to plugin binary
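Note: the kapi.go "waiting for pod ... Pending" lines that follow poll pods by label selector until they report Ready. An equivalent manual check with kubectl, using the label selectors and namespaces from the log (the timeout value here is illustrative only):

    kubectl -n kube-system wait --for=condition=ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m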
	I0803 22:50:56.110743   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:56.443357   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:56.444903   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:56.945230   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:56.958294   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:57.159217   18056 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.871728882s)
	I0803 22:50:57.159213   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.726696805s)
	I0803 22:50:57.159394   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:57.159412   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:57.159650   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:57.159667   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:57.159678   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:57.159686   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:57.160689   18056 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0803 22:50:57.161415   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:57.161423   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:57.161438   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:57.161453   18056 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-110246"
	I0803 22:50:57.163354   18056 out.go:177] * Verifying csi-hostpath-driver addon...
	I0803 22:50:57.163354   18056 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:57.164519   18056 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0803 22:50:57.164538   18056 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0803 22:50:57.165159   18056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0803 22:50:57.197180   18056 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0803 22:50:57.197208   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:57.234262   18056 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0803 22:50:57.234287   18056 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0803 22:50:57.324594   18056 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 22:50:57.324620   18056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0803 22:50:57.395096   18056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 22:50:57.443656   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:57.445903   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:57.714708   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:57.908366   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:57.945198   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:57.945437   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:58.074055   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.963259971s)
	I0803 22:50:58.074112   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.074125   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.074459   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.074498   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.074512   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.074520   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.074726   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:58.074767   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.074784   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.171648   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:58.443050   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:58.444813   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:58.678164   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:58.868756   18056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.473604857s)
	I0803 22:50:58.868836   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.868852   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.869151   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.869217   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.869180   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:58.869233   18056 main.go:141] libmachine: Making call to close driver server
	I0803 22:50:58.869242   18056 main.go:141] libmachine: (addons-110246) Calling .Close
	I0803 22:50:58.869480   18056 main.go:141] libmachine: Successfully made call to close driver server
	I0803 22:50:58.869494   18056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 22:50:58.869515   18056 main.go:141] libmachine: (addons-110246) DBG | Closing plugin on server side
	I0803 22:50:58.871488   18056 addons.go:475] Verifying addon gcp-auth=true in "addons-110246"
	I0803 22:50:58.874374   18056 out.go:177] * Verifying gcp-auth addon...
	I0803 22:50:58.876478   18056 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0803 22:50:58.891057   18056 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0803 22:50:58.891079   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:50:58.943318   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:58.945270   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.172976   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:59.382909   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:50:59.444544   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:59.446049   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.670963   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:59.880922   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:50:59.916852   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:59.946508   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.947434   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:00.170483   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:00.380361   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:00.444868   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:00.445270   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:00.671773   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:00.880567   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:00.944374   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:00.944759   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.171397   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:01.380229   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:01.444835   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:01.445094   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.671142   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:01.880717   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:01.945407   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.946233   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:02.173727   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:02.379998   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:02.408031   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:02.446678   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:02.451003   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:02.671939   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:02.880704   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:02.944023   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:02.946132   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:03.171420   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:03.380794   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:03.444828   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:03.447477   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:03.738368   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:03.880818   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:03.943101   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:03.944533   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.178230   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:04.381557   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:04.443509   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:04.446659   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.671441   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:04.880373   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:04.908281   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:04.945103   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.947414   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:05.171653   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:05.380028   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:05.444644   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:05.446622   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:05.670339   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:05.880352   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:05.944035   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:05.945587   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:06.172517   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:06.380781   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:06.442908   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:06.442969   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:06.671164   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:06.880314   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:06.943426   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:06.943723   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:07.170796   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:07.380760   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:07.408522   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:07.442876   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:07.442986   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:07.670980   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:07.881280   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:07.948026   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:07.948170   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:08.171056   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:08.380665   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:08.445539   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:08.446474   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:08.671521   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:08.880814   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:08.943490   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:08.945006   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:09.175147   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:09.380254   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:09.443061   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:09.444284   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:09.671573   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:09.879670   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:09.911373   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:09.945204   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:09.945214   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:10.171583   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:10.380744   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:10.443998   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:10.445624   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:10.670168   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:10.879616   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:10.945813   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:10.946060   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:11.172277   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:11.381588   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:11.443133   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:11.448632   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:11.670560   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:11.880655   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:11.944353   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:11.944874   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:12.170198   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:12.380416   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:12.408927   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:12.443311   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:12.443377   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:12.671638   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:12.880116   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:12.952112   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:12.952545   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:13.171647   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:13.380820   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:13.444649   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:13.444991   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:13.671461   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:13.880988   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:14.149800   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:14.151050   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:14.170512   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:14.380151   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:14.444861   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:14.450098   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:14.670776   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:14.880614   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:14.907930   18056 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"False"
	I0803 22:51:14.944266   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:14.953090   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:15.172676   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:15.380865   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:15.409969   18056 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace has status "Ready":"True"
	I0803 22:51:15.409995   18056 pod_ready.go:81] duration metric: took 22.008279196s for pod "nvidia-device-plugin-daemonset-f6gv6" in "kube-system" namespace to be "Ready" ...
	I0803 22:51:15.410003   18056 pod_ready.go:38] duration metric: took 25.828596629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 22:51:15.410018   18056 api_server.go:52] waiting for apiserver process to appear ...
	I0803 22:51:15.410063   18056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 22:51:15.430878   18056 api_server.go:72] duration metric: took 28.892957799s to wait for apiserver process to appear ...
	I0803 22:51:15.430919   18056 api_server.go:88] waiting for apiserver healthz status ...
	I0803 22:51:15.430943   18056 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I0803 22:51:15.435043   18056 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I0803 22:51:15.436193   18056 api_server.go:141] control plane version: v1.30.3
	I0803 22:51:15.436213   18056 api_server.go:131] duration metric: took 5.28654ms to wait for apiserver health ...
	I0803 22:51:15.436219   18056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 22:51:15.443740   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:15.444137   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:15.444616   18056 system_pods.go:59] 18 kube-system pods found
	I0803 22:51:15.444637   18056 system_pods.go:61] "coredns-7db6d8ff4d-hbp7b" [f9309e8e-3027-46d2-b989-2f285fcf10f4] Running
	I0803 22:51:15.444646   18056 system_pods.go:61] "csi-hostpath-attacher-0" [d5c3e8a0-1571-4ee3-a3cb-c726b1bddccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0803 22:51:15.444669   18056 system_pods.go:61] "csi-hostpath-resizer-0" [aa05ea21-0c03-4cc5-ba5d-4ef7dcce50b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0803 22:51:15.444682   18056 system_pods.go:61] "csi-hostpathplugin-cnwdb" [8d4d7011-2902-48df-a117-b7afc2e94916] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0803 22:51:15.444688   18056 system_pods.go:61] "etcd-addons-110246" [9586d714-7768-4e6c-93c7-1525119eef59] Running
	I0803 22:51:15.444694   18056 system_pods.go:61] "kube-apiserver-addons-110246" [5c7dc265-a4c7-4dfb-919d-cf428fcf1674] Running
	I0803 22:51:15.444698   18056 system_pods.go:61] "kube-controller-manager-addons-110246" [d568c53e-2834-4902-888b-b1627f65e978] Running
	I0803 22:51:15.444704   18056 system_pods.go:61] "kube-ingress-dns-minikube" [6a3fbc83-11d9-435d-87e5-1a494cf8c714] Running
	I0803 22:51:15.444707   18056 system_pods.go:61] "kube-proxy-lfl9m" [77bd9bb9-4577-4a8c-bdd2-970a32e4467b] Running
	I0803 22:51:15.444711   18056 system_pods.go:61] "kube-scheduler-addons-110246" [bbba425e-6b27-4154-81f2-3e80e941f607] Running
	I0803 22:51:15.444717   18056 system_pods.go:61] "metrics-server-c59844bb4-wbhpt" [bb904756-9056-4069-b53b-b35f8c0bde90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0803 22:51:15.444724   18056 system_pods.go:61] "nvidia-device-plugin-daemonset-f6gv6" [5d7278f7-553b-40c0-a2b4-059ba877ae75] Running
	I0803 22:51:15.444730   18056 system_pods.go:61] "registry-698f998955-4bhmt" [d9661cee-e4cd-468d-a421-0e709c62e138] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0803 22:51:15.444737   18056 system_pods.go:61] "registry-proxy-4sg2g" [df0da2d6-2cf2-471c-9b29-c471d61d67b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0803 22:51:15.444746   18056 system_pods.go:61] "snapshot-controller-745499f584-8t6hx" [66934af4-c7e5-4ec2-a4c0-983cc9acc894] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.444758   18056 system_pods.go:61] "snapshot-controller-745499f584-pgmqb" [610d2e0a-47ed-4aa1-b767-2701c23b6276] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.444768   18056 system_pods.go:61] "storage-provisioner" [4abb12c4-8b99-40af-8da9-1f36ecb668a0] Running
	I0803 22:51:15.444779   18056 system_pods.go:61] "tiller-deploy-6677d64bcd-zv5cc" [479ff6dd-8760-4dec-8f87-d1236801993f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0803 22:51:15.444787   18056 system_pods.go:74] duration metric: took 8.562356ms to wait for pod list to return data ...
	I0803 22:51:15.444796   18056 default_sa.go:34] waiting for default service account to be created ...
	I0803 22:51:15.446434   18056 default_sa.go:45] found service account: "default"
	I0803 22:51:15.446448   18056 default_sa.go:55] duration metric: took 1.646743ms for default service account to be created ...
	I0803 22:51:15.446454   18056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 22:51:15.454470   18056 system_pods.go:86] 18 kube-system pods found
	I0803 22:51:15.454488   18056 system_pods.go:89] "coredns-7db6d8ff4d-hbp7b" [f9309e8e-3027-46d2-b989-2f285fcf10f4] Running
	I0803 22:51:15.454496   18056 system_pods.go:89] "csi-hostpath-attacher-0" [d5c3e8a0-1571-4ee3-a3cb-c726b1bddccb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0803 22:51:15.454504   18056 system_pods.go:89] "csi-hostpath-resizer-0" [aa05ea21-0c03-4cc5-ba5d-4ef7dcce50b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0803 22:51:15.454512   18056 system_pods.go:89] "csi-hostpathplugin-cnwdb" [8d4d7011-2902-48df-a117-b7afc2e94916] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0803 22:51:15.454519   18056 system_pods.go:89] "etcd-addons-110246" [9586d714-7768-4e6c-93c7-1525119eef59] Running
	I0803 22:51:15.454525   18056 system_pods.go:89] "kube-apiserver-addons-110246" [5c7dc265-a4c7-4dfb-919d-cf428fcf1674] Running
	I0803 22:51:15.454531   18056 system_pods.go:89] "kube-controller-manager-addons-110246" [d568c53e-2834-4902-888b-b1627f65e978] Running
	I0803 22:51:15.454536   18056 system_pods.go:89] "kube-ingress-dns-minikube" [6a3fbc83-11d9-435d-87e5-1a494cf8c714] Running
	I0803 22:51:15.454542   18056 system_pods.go:89] "kube-proxy-lfl9m" [77bd9bb9-4577-4a8c-bdd2-970a32e4467b] Running
	I0803 22:51:15.454547   18056 system_pods.go:89] "kube-scheduler-addons-110246" [bbba425e-6b27-4154-81f2-3e80e941f607] Running
	I0803 22:51:15.454553   18056 system_pods.go:89] "metrics-server-c59844bb4-wbhpt" [bb904756-9056-4069-b53b-b35f8c0bde90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0803 22:51:15.454560   18056 system_pods.go:89] "nvidia-device-plugin-daemonset-f6gv6" [5d7278f7-553b-40c0-a2b4-059ba877ae75] Running
	I0803 22:51:15.454567   18056 system_pods.go:89] "registry-698f998955-4bhmt" [d9661cee-e4cd-468d-a421-0e709c62e138] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0803 22:51:15.454575   18056 system_pods.go:89] "registry-proxy-4sg2g" [df0da2d6-2cf2-471c-9b29-c471d61d67b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0803 22:51:15.454584   18056 system_pods.go:89] "snapshot-controller-745499f584-8t6hx" [66934af4-c7e5-4ec2-a4c0-983cc9acc894] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.454592   18056 system_pods.go:89] "snapshot-controller-745499f584-pgmqb" [610d2e0a-47ed-4aa1-b767-2701c23b6276] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:51:15.454599   18056 system_pods.go:89] "storage-provisioner" [4abb12c4-8b99-40af-8da9-1f36ecb668a0] Running
	I0803 22:51:15.454604   18056 system_pods.go:89] "tiller-deploy-6677d64bcd-zv5cc" [479ff6dd-8760-4dec-8f87-d1236801993f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0803 22:51:15.454612   18056 system_pods.go:126] duration metric: took 8.152871ms to wait for k8s-apps to be running ...
	I0803 22:51:15.454618   18056 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 22:51:15.454659   18056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 22:51:15.469466   18056 system_svc.go:56] duration metric: took 14.837376ms WaitForService to wait for kubelet
	I0803 22:51:15.469491   18056 kubeadm.go:582] duration metric: took 28.931575342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 22:51:15.469512   18056 node_conditions.go:102] verifying NodePressure condition ...
	I0803 22:51:15.472479   18056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 22:51:15.472499   18056 node_conditions.go:123] node cpu capacity is 2
	I0803 22:51:15.472510   18056 node_conditions.go:105] duration metric: took 2.994661ms to run NodePressure ...
	I0803 22:51:15.472520   18056 start.go:241] waiting for startup goroutines ...
	I0803 22:51:15.670226   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:15.880710   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:15.944092   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:15.944634   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.171096   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:16.380721   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:16.443621   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:16.446055   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.671582   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:16.881429   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:16.944723   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.945537   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:17.174128   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:17.379968   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:17.445312   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:17.448021   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:17.672495   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:17.879768   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:17.947702   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:17.948318   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:18.171521   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:18.381612   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:18.444221   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:18.444696   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:18.671112   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:18.881086   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:18.943557   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:18.943978   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.170314   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:19.381559   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:19.443775   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.443880   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:19.670685   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:19.880371   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:19.943517   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.943888   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:20.170936   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:20.380367   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:20.444036   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:20.444134   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:20.671689   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:20.881325   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:20.944047   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:20.945881   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.170378   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:21.380649   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:21.443493   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:21.445759   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.671372   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:21.880241   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:21.944349   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.944882   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:22.171456   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:22.380684   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:22.443864   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:22.445048   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:22.671328   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:22.881570   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:22.942335   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:22.943889   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:23.170910   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:23.380906   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:23.443352   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:23.445363   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:23.671046   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:23.880488   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:23.944271   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:23.945887   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:24.170180   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:24.381207   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:24.442912   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:24.443937   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:24.671015   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:24.880204   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:24.943725   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:24.943956   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:25.170319   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:25.380785   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:25.443041   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:25.444632   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:25.671627   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:25.879711   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:25.943677   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:25.955507   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:26.171297   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:26.424372   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:26.447844   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:26.447973   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:26.670428   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:26.882640   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:26.942681   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:26.943923   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:27.170052   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:27.383142   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:27.443297   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:27.444625   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:27.670720   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:27.880651   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:27.942864   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:27.944726   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:28.170075   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:28.380352   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:28.444854   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:28.445459   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:28.677213   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:28.881527   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:28.944312   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:28.944418   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:29.170384   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:29.380907   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:29.442963   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:29.445375   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:29.671473   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:29.879895   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:29.943245   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:29.945165   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:30.170635   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:30.379408   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:30.444771   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:30.445186   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:30.671615   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:31.052180   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:31.052632   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:31.053298   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:31.171471   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:31.380565   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:31.442670   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:31.444672   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:31.670655   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:31.880766   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:31.945195   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:31.947153   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:32.170445   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:32.381646   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:32.443114   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:32.444427   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:32.671253   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:32.879758   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:32.943419   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:32.943637   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:33.171460   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:33.380084   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:33.443598   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:33.443710   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:33.679753   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:33.880092   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:33.944647   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:33.946403   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:34.171348   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:34.381106   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:34.445403   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:34.445619   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:35.061412   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:35.064619   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:35.065104   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:35.065216   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:35.170678   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:35.380316   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:35.444411   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:35.446330   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:35.674539   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:35.879919   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:35.943063   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:35.944180   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:36.170594   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:36.380326   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:36.444695   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:36.445002   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:36.680922   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:36.880854   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:36.942980   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:36.943841   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:37.170792   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:37.380611   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:37.443247   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:37.443756   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:37.677827   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:37.880245   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:37.943557   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:37.945405   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:38.173182   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:38.380257   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:38.444590   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:38.444609   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:38.670997   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:38.880401   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:38.944555   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:38.945064   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:39.171635   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:39.380099   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:39.443087   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:39.446292   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:39.670880   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:39.879961   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:39.943683   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:39.945213   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:40.170459   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:40.380660   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:40.443733   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:40.444446   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:40.671064   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:40.880070   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:40.945441   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:40.945765   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:41.389730   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:41.394067   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:41.447475   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:41.450661   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:41.670823   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:41.880744   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:41.943951   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:41.944289   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:42.170778   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:42.381467   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:42.443877   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:51:42.446564   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:42.671207   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:42.880358   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:42.945511   18056 kapi.go:107] duration metric: took 47.007220848s to wait for kubernetes.io/minikube-addons=registry ...
	I0803 22:51:42.945780   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:43.169944   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:43.380178   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:43.444283   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:43.671691   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:43.879993   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:43.943647   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:44.171070   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:44.380998   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:44.446180   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:44.670377   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:44.880196   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:44.947182   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:45.170566   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:45.380448   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:45.445386   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:45.671217   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:45.880913   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:45.943771   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:46.170368   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:46.380245   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:46.443961   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:46.671330   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:46.880504   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:46.946939   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:47.174028   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:47.380537   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:47.444627   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:47.671875   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:47.880309   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:47.944939   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:48.191662   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:48.380321   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:48.444020   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:48.673131   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:48.880707   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:48.944505   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:49.171930   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:49.381395   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:49.443690   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:49.670210   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:49.879574   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:49.945606   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:50.171479   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:50.380415   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:50.444854   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:50.671587   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:50.881894   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:50.943672   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:51.170541   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:51.380751   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:51.443742   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:51.670692   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:51.879809   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:51.943694   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:52.170849   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:52.380823   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:52.444069   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:52.673630   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:52.879977   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:52.943806   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:53.171768   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:53.380336   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:53.445404   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:53.671813   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:53.880146   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:53.944079   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:54.170924   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:54.380726   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:54.443511   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:54.671307   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:54.880260   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:54.943733   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:55.170330   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:55.380604   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:55.444871   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:55.843456   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:55.881135   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:55.944524   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:56.171969   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:56.380477   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:56.445059   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:56.669969   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:56.879976   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:56.951692   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:57.170771   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:57.380611   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:57.444440   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:57.671010   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:57.881241   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:57.944211   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:58.170744   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:58.380504   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:58.444439   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:58.671388   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:58.880374   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:58.944348   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:59.171176   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:59.379842   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:59.444538   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:59.671019   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:59.887138   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:59.946427   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:00.171339   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:00.380449   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:00.444113   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:00.671272   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:00.881077   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:00.943639   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:01.170842   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:01.380410   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:01.444300   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:01.670535   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:01.879921   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:01.943696   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:02.172403   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:02.380747   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:02.444412   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:03.026190   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:03.030780   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.031222   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:03.171206   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:03.380662   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.446422   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:03.671096   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:03.881727   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.946269   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:04.177770   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:04.380081   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:04.450682   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:04.682353   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:04.891611   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:04.945520   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:05.171287   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:05.380377   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:05.445041   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:05.671340   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:05.880350   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:05.945248   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:06.175882   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:06.382941   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:06.444218   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:06.670484   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:06.880988   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:06.943918   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:07.171535   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:07.380250   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:07.452156   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:07.670406   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:07.881102   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:07.944375   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:08.171051   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:08.380930   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:08.444403   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:08.671184   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:08.880751   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:08.943799   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:09.170984   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:09.384160   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:09.444297   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:09.670161   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:09.883199   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:09.946918   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:10.171473   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:10.379854   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:10.444634   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:10.671481   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:10.880741   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:10.943801   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:11.171793   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:11.380232   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:11.447603   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:11.670600   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:11.881032   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:11.944006   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:12.170590   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:52:12.381052   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:12.444805   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:12.670098   18056 kapi.go:107] duration metric: took 1m15.50493638s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0803 22:52:12.881871   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:12.944445   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:13.380991   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:13.444048   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:13.880339   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:13.943752   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:14.380155   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:14.444490   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:14.881135   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:14.944280   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:15.380462   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:15.444654   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:15.881076   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:15.944761   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:16.380127   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:16.444361   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:16.880470   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:16.944621   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:17.381749   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:17.444591   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:17.881460   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:17.945385   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:18.381014   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:18.444744   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:18.881226   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:18.944836   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:19.381300   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:19.445472   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:19.880874   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:19.944627   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:20.380794   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:20.444092   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:20.880499   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:20.947760   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:21.380534   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:21.444533   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:21.880365   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:21.944077   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:22.381039   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:22.444615   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:22.881083   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:22.944541   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:23.380537   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:23.444424   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:23.882483   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:23.944530   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:24.381248   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:24.444539   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:24.880706   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:24.945111   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:25.380141   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:25.445954   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:25.880073   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:25.943655   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:26.380853   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:26.444099   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:26.880314   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:26.944346   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:27.380660   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:27.443867   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:27.881071   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:27.943831   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:28.380622   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:28.447435   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:28.880272   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:28.944704   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:29.381226   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:29.445035   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:29.880041   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:29.943926   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:30.381788   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:30.443917   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:30.881739   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:30.943889   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:31.379975   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:31.443944   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:31.880080   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:31.944016   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:32.380697   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:32.444063   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:32.880185   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:32.944584   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:33.381822   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:33.444441   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:33.880892   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:33.944008   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:34.380357   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:34.444481   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:34.881603   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:34.944717   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:35.382252   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:35.445026   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:35.880736   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:35.944025   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:36.380129   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:36.444106   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:36.880419   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:36.944183   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:37.382034   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:37.444467   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:37.880806   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:37.944053   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:38.380242   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:38.444733   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:38.879863   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:38.944464   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:39.380948   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:39.444563   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:39.881281   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:39.944236   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:40.380585   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:40.445194   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:40.880249   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:40.944414   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:41.380393   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:41.445861   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:41.880685   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:41.943770   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:42.380916   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:42.444493   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:42.881179   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:42.944164   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:43.380095   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:43.444785   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:43.879887   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:43.944213   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:44.381820   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:44.443861   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:44.881250   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:44.944102   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:45.379962   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:45.444312   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:45.880619   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:45.945526   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:46.381667   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:46.445025   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:46.880484   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:46.944711   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:47.380234   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:47.445192   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:47.881176   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:47.943897   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:48.380253   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:48.444631   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:48.880715   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:48.944669   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:49.383647   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:49.444899   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:49.880271   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:49.945703   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:50.381990   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:50.444442   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:50.882035   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:50.944452   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:51.380219   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:51.444518   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:51.880816   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:51.943839   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:52.381176   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:52.444720   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:52.880500   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:52.944728   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:53.381222   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:53.444202   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:53.884002   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:53.947813   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:54.379947   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:54.444321   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:54.880590   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:54.944830   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:55.380713   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:55.443973   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:55.880138   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:55.943992   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:56.379849   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:56.443914   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:56.879975   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:56.943944   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:57.382066   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:57.444701   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:57.880401   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:57.944477   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:58.380315   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:58.444181   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:58.880217   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:58.944292   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:59.380422   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:59.444893   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:52:59.879937   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:59.944239   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:00.380402   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:00.444474   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:00.880912   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:00.944133   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:01.380019   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:01.444152   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:01.880499   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:01.945985   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:02.380086   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:02.444558   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:02.880740   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:02.943886   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:03.380509   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:03.444478   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:03.882042   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:03.944570   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:04.381497   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:04.445880   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:04.880382   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:04.943999   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:05.380464   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:05.444673   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:05.881052   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:05.944205   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:06.380439   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:06.444752   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:06.879729   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:06.943924   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:07.380131   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:07.445518   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:07.880700   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:07.945015   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:08.381111   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:08.444090   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:08.880173   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:08.944193   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:09.380396   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:09.444321   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:09.880864   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:09.944315   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:10.380559   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:10.444272   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:10.880333   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:10.943972   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:11.379958   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:11.444098   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:11.880174   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:11.944675   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:12.380162   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:12.444913   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:12.880554   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:12.944546   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:13.380683   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:13.444554   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:13.880746   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:13.944286   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:14.381586   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:14.447462   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:14.880884   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:14.943561   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:15.380454   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:15.444434   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:15.880942   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:15.944353   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:16.379773   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:16.452082   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:16.879647   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:16.943275   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:17.381171   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:17.444583   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:17.880445   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:17.944581   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:18.380669   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:18.443519   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:18.880671   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:18.944409   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:19.380293   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:19.444407   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:19.881228   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:19.944127   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:20.381456   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:20.444425   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:20.880278   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:20.944283   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:21.380305   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:21.444105   18056 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:53:21.880541   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:21.945060   18056 kapi.go:107] duration metric: took 2m26.005622747s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0803 22:53:22.385398   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:22.880523   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:23.379918   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:23.881514   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:24.380076   18056 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:53:24.881796   18056 kapi.go:107] duration metric: took 2m26.005315305s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0803 22:53:24.883443   18056 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-110246 cluster.
	I0803 22:53:24.884620   18056 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0803 22:53:24.885719   18056 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0803 22:53:24.886887   18056 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0803 22:53:24.888065   18056 addons.go:510] duration metric: took 2m38.350114202s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget nvidia-device-plugin yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0803 22:53:24.888099   18056 start.go:246] waiting for cluster config update ...
	I0803 22:53:24.888119   18056 start.go:255] writing updated cluster config ...
	I0803 22:53:24.888396   18056 ssh_runner.go:195] Run: rm -f paused
	I0803 22:53:24.938364   18056 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0803 22:53:24.940116   18056 out.go:177] * Done! kubectl is now configured to use "addons-110246" cluster and "default" namespace by default
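	For reference, the `gcp-auth-skip-secret` opt-out mentioned in the gcp-auth log messages above is applied as a pod label. A minimal sketch follows; the pod name and image are illustrative only, and the "true" value is an assumption (the admission webhook keys on the label being set on the pod, not on this test's manifests):

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds-example        # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"    # label opts this pod out of GCP credential mounting (value assumed)
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/k8s-minikube/busybox   # any image works; this one appears elsewhere in this report

	Pods created without this label would get the mounted credentials, per the addon output above; existing pods would need to be recreated or the addon re-enabled with --refresh.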
	
	
	==> CRI-O <==
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.213161303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725965213131407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7aa5a58f-d2ba-4bf5-8388-cd101b49e6bc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.213835627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb753849-c614-45d3-add4-81e48016ea39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.213896260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb753849-c614-45d3-add4-81e48016ea39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.214145218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273
badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a
83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNIN
G,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17227
25427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb753849-c614-45d3-add4-81e48016ea39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.263729199Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7896190a-844e-4158-a759-2100538dec74 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.263802462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7896190a-844e-4158-a759-2100538dec74 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.265216650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4083b8c7-f476-4be0-874d-24de9bb5206b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.266467272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725965266439502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4083b8c7-f476-4be0-874d-24de9bb5206b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.267137733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d49b0829-2fc8-4f05-ac3f-cb1d152c3bd0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.267467252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d49b0829-2fc8-4f05-ac3f-cb1d152c3bd0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.267890977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273
badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a
83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNIN
G,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17227
25427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d49b0829-2fc8-4f05-ac3f-cb1d152c3bd0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.310707046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcfeda44-dc29-4340-aa74-a35e84d43704 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.310890864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcfeda44-dc29-4340-aa74-a35e84d43704 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.312047666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e23202b-9a08-4ba5-a2d7-fbb499a38c5c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.313715802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725965313686957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e23202b-9a08-4ba5-a2d7-fbb499a38c5c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.314423519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc6730b5-6841-44ae-83be-e922ea5eccde name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.314478112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc6730b5-6841-44ae-83be-e922ea5eccde name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.314737981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273
badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a
83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNIN
G,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17227
25427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc6730b5-6841-44ae-83be-e922ea5eccde name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.351573091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad5d3648-67cc-44e8-8b79-d67345817ee4 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.351667223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad5d3648-67cc-44e8-8b79-d67345817ee4 name=/runtime.v1.RuntimeService/Version
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.353112755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79e3b514-bd45-40e5-9c34-2e42ba7131de name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.354797152Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722725965354771202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79e3b514-bd45-40e5-9c34-2e42ba7131de name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.355416337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94ba339e-d894-4d86-a667-5e024dd868c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.355468424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94ba339e-d894-4d86-a667-5e024dd868c7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 22:59:25 addons-110246 crio[681]: time="2024-08-03 22:59:25.355716742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7bb600c6f663657e9ff2fecffd5ddbc554be92da20e24a1661a25f8a56cc417,PodSandboxId:de174b7cf4ebb1ff3570248169673beb843ed29bbc4916e6c17d1d574cc05095,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722725835021464801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-ssxwk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: edf29210-e2de-4bb2-885a-b86e2ea89fda,},Annotations:map[string]string{io.kubernetes.container.hash: e1e3c492,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a2a6d788927136bbd6b0d338da6e707114989773a9bf37f7f004ae5c45f49a,PodSandboxId:3555045f0b2acb65cbd2b611affc11851d9d305249a7be0879c761cee6081881,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722725694192935137,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: df82dc23-6a96-474c-90c3-83927b83004d,},Annotations:map[string]string{io.kubernet
es.container.hash: 3192669,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a88886060efe9b80726b0163f49d9e06732c4200aebf30b3558e7c6ef1b64191,PodSandboxId:ca16c417d8b2fb0887a378b217f4631ff68847525dbb3e3df13262d04db867c1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722725610810290216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb166619-49c3-48d6-8b
0e-aef40c36a54e,},Annotations:map[string]string{io.kubernetes.container.hash: bb723707,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2820918c8d761eb1b5ba780f6d7b16ea38fa9fdbc956267f7567f65217b6b26e,PodSandboxId:6f48d692b5da414c64e36be336fdafcb580bfa48d15c1f592a3dff62c3815a37,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722725504502137835,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wbhpt,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: bb904756-9056-4069-b53b-b35f8c0bde90,},Annotations:map[string]string{io.kubernetes.container.hash: fc9b69ad,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715,PodSandboxId:90e74ade0381446d40631423aec63d569f691a131283c7345732534efbceff96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722725453349499106,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,i
o.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abb12c4-8b99-40af-8da9-1f36ecb668a0,},Annotations:map[string]string{io.kubernetes.container.hash: 344623a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f,PodSandboxId:66b36c9e8efbdaea9ee3173b7ab00ecfd2461f86f69f3ffe4e63e5381855afdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722725450678216260,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-hbp7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9309e8e-3027-46d2-b989-2f285fcf10f4,},Annotations:map[string]string{io.kubernetes.container.hash: 13b8ab0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f,PodSandboxId:95e040ab7ea7008cfb93dc9653f6aafc08c8a243e9808206807f148a9d54d577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722725448007772807,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfl9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77bd9bb9-4577-4a8c-bdd2-970a32e4467b,},Annotations:map[string]string{io.kubernetes.container.hash: adc08610,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2,PodSandboxId:8b54663b3bc8c9cf5acbbf1b5b7cb80c1974562667ca2806ad06c193b4764165,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273
badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722725427859901938,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76e36030b906568b4cf9b484097683b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb,PodSandboxId:3279939d63505401b2016dfb25b692f6d80746089d95c4bcf2205933158292f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a
83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722725427868705147,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19c77aa6a9ed591032cbe3a2a15eae3,},Annotations:map[string]string{io.kubernetes.container.hash: 25646d7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1,PodSandboxId:1e6afb16b919742e6f309063fcaad698565e27e0eb5bd52b749fbb1d4e754938,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNIN
G,CreatedAt:1722725427805553588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 807c4463fad0440cc25c5dd70b946b98,},Annotations:map[string]string{io.kubernetes.container.hash: 23947cef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717,PodSandboxId:8b396e9d2266cb7b77cf603c81b8458da9a29f5e0312e23693be3336cf7ae01c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:17227
25427809104889,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-110246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9663ccb7cf85506aa8bb62dc8cd9fe6a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94ba339e-d894-4d86-a667-5e024dd868c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7bb600c6f663       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   de174b7cf4ebb       hello-world-app-6778b5fc9f-ssxwk
	33a2a6d788927       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   3555045f0b2ac       nginx
	a88886060efe9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   ca16c417d8b2f       busybox
	2820918c8d761       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   6f48d692b5da4       metrics-server-c59844bb4-wbhpt
	cca6528238e5e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   90e74ade03814       storage-provisioner
	319bbb8331ea7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   66b36c9e8efbd       coredns-7db6d8ff4d-hbp7b
	314af48a2afb0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        8 minutes ago       Running             kube-proxy                0                   95e040ab7ea70       kube-proxy-lfl9m
	1160ba49f76d2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   3279939d63505       etcd-addons-110246
	3dceba7bfaac3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   8b54663b3bc8c       kube-controller-manager-addons-110246
	4cc2277cc91fd       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   8b396e9d2266c       kube-scheduler-addons-110246
	f6c7d5bcf5b65       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   1e6afb16b9197       kube-apiserver-addons-110246
	
	
	==> coredns [319bbb8331ea75f685cc311227fb4b0a7c55495dae263025bb13d01c1768ca7f] <==
	[INFO] 10.244.0.7:46429 - 34813 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000618261s
	[INFO] 10.244.0.7:39065 - 2231 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093225s
	[INFO] 10.244.0.7:39065 - 28808 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064501s
	[INFO] 10.244.0.7:47178 - 40385 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083219s
	[INFO] 10.244.0.7:47178 - 40647 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051866s
	[INFO] 10.244.0.7:33924 - 58568 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084451s
	[INFO] 10.244.0.7:33924 - 20681 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000064526s
	[INFO] 10.244.0.7:42520 - 26215 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075904s
	[INFO] 10.244.0.7:42520 - 32362 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075269s
	[INFO] 10.244.0.7:54734 - 26138 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051652s
	[INFO] 10.244.0.7:54734 - 6757 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133727s
	[INFO] 10.244.0.7:33086 - 44761 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047933s
	[INFO] 10.244.0.7:33086 - 34263 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003706s
	[INFO] 10.244.0.7:56729 - 39231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000036905s
	[INFO] 10.244.0.7:56729 - 64317 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039048s
	[INFO] 10.244.0.22:46021 - 16683 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00049237s
	[INFO] 10.244.0.22:41228 - 17048 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181134s
	[INFO] 10.244.0.22:41641 - 54046 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091817s
	[INFO] 10.244.0.22:49214 - 42036 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000062518s
	[INFO] 10.244.0.22:53977 - 49572 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116389s
	[INFO] 10.244.0.22:42481 - 20413 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000179361s
	[INFO] 10.244.0.22:34241 - 26941 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002271059s
	[INFO] 10.244.0.22:38459 - 42482 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002947553s
	[INFO] 10.244.0.26:54543 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000389559s
	[INFO] 10.244.0.26:39747 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156887s
	
	
	==> describe nodes <==
	Name:               addons-110246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-110246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=addons-110246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T22_50_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-110246
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 22:50:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-110246
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 22:59:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 22:57:42 +0000   Sat, 03 Aug 2024 22:50:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 22:57:42 +0000   Sat, 03 Aug 2024 22:50:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 22:57:42 +0000   Sat, 03 Aug 2024 22:50:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 22:57:42 +0000   Sat, 03 Aug 2024 22:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    addons-110246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 b301beb5cf5941c6ab473fb46617cd1b
	  System UUID:                b301beb5-cf59-41c6-ab47-3fb46617cd1b
	  Boot ID:                    65f6a715-e3fe-407e-a1ea-bee8318505e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  default                     hello-world-app-6778b5fc9f-ssxwk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-7db6d8ff4d-hbp7b                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m39s
	  kube-system                 etcd-addons-110246                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m52s
	  kube-system                 kube-apiserver-addons-110246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 kube-controller-manager-addons-110246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 kube-proxy-lfl9m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-scheduler-addons-110246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 metrics-server-c59844bb4-wbhpt           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m33s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m36s                  kube-proxy       
	  Normal  Starting                 8m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m58s (x8 over 8m58s)  kubelet          Node addons-110246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x8 over 8m58s)  kubelet          Node addons-110246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x7 over 8m58s)  kubelet          Node addons-110246 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m52s                  kubelet          Node addons-110246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m52s                  kubelet          Node addons-110246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m52s                  kubelet          Node addons-110246 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m51s                  kubelet          Node addons-110246 status is now: NodeReady
	  Normal  RegisteredNode           8m39s                  node-controller  Node addons-110246 event: Registered Node addons-110246 in Controller
	
	
	==> dmesg <==
	[  +9.994140] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.757392] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.223665] kauditd_printk_skb: 2 callbacks suppressed
	[Aug 3 22:52] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.002045] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.252506] kauditd_printk_skb: 9 callbacks suppressed
	[ +37.165201] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 3 22:53] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.356321] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.369691] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.280988] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.775656] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.775213] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.087647] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 3 22:54] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.158994] kauditd_printk_skb: 63 callbacks suppressed
	[ +13.999768] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.397076] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.239854] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.052740] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.219253] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.267933] kauditd_printk_skb: 22 callbacks suppressed
	[Aug 3 22:55] kauditd_printk_skb: 33 callbacks suppressed
	[Aug 3 22:57] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.205456] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [1160ba49f76d2377defb95d70fbf7c9a6b02c5a9146cde8fc9fe9e9ca86ac2eb] <==
	{"level":"info","ts":"2024-08-03T22:51:41.377136Z","caller":"traceutil/trace.go:171","msg":"trace[714241903] transaction","detail":"{read_only:false; response_revision:972; number_of_response:1; }","duration":"294.592682ms","start":"2024-08-03T22:51:41.082537Z","end":"2024-08-03T22:51:41.377129Z","steps":["trace[714241903] 'process raft request'  (duration: 293.982426ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:51:41.3784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.65254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85094"}
	{"level":"info","ts":"2024-08-03T22:51:41.379607Z","caller":"traceutil/trace.go:171","msg":"trace[871921271] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:972; }","duration":"221.887538ms","start":"2024-08-03T22:51:41.157708Z","end":"2024-08-03T22:51:41.379595Z","steps":["trace[871921271] 'agreement among raft nodes before linearized reading'  (duration: 219.183412ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:51:55.829604Z","caller":"traceutil/trace.go:171","msg":"trace[1535032267] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1043; }","duration":"171.632725ms","start":"2024-08-03T22:51:55.657951Z","end":"2024-08-03T22:51:55.829584Z","steps":["trace[1535032267] 'read index received'  (duration: 171.504959ms)","trace[1535032267] 'applied index is now lower than readState.Index'  (duration: 127.323µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-03T22:51:55.829971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.957141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85371"}
	{"level":"info","ts":"2024-08-03T22:51:55.829999Z","caller":"traceutil/trace.go:171","msg":"trace[1590357607] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1013; }","duration":"172.065255ms","start":"2024-08-03T22:51:55.657926Z","end":"2024-08-03T22:51:55.829991Z","steps":["trace[1590357607] 'agreement among raft nodes before linearized reading'  (duration: 171.744404ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:51:55.830146Z","caller":"traceutil/trace.go:171","msg":"trace[878050054] transaction","detail":"{read_only:false; response_revision:1013; number_of_response:1; }","duration":"363.251446ms","start":"2024-08-03T22:51:55.466881Z","end":"2024-08-03T22:51:55.830133Z","steps":["trace[878050054] 'process raft request'  (duration: 362.615561ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:51:55.830245Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T22:51:55.466865Z","time spent":"363.311266ms","remote":"127.0.0.1:54508","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1005 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-03T22:52:03.010966Z","caller":"traceutil/trace.go:171","msg":"trace[1507259759] linearizableReadLoop","detail":"{readStateIndex:1090; appliedIndex:1089; }","duration":"353.657155ms","start":"2024-08-03T22:52:02.657288Z","end":"2024-08-03T22:52:03.010945Z","steps":["trace[1507259759] 'read index received'  (duration: 353.519254ms)","trace[1507259759] 'applied index is now lower than readState.Index'  (duration: 137.279µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-03T22:52:03.010992Z","caller":"traceutil/trace.go:171","msg":"trace[744862575] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"477.472748ms","start":"2024-08-03T22:52:02.533498Z","end":"2024-08-03T22:52:03.01097Z","steps":["trace[744862575] 'process raft request'  (duration: 477.29274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:52:03.011186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T22:52:02.533482Z","time spent":"477.600855ms","remote":"127.0.0.1:54430","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":798,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-c59844bb4-wbhpt.17e859b85bc6d6a0\" mod_revision:1003 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-c59844bb4-wbhpt.17e859b85bc6d6a0\" value_size:704 lease:6583014823065036420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-c59844bb4-wbhpt.17e859b85bc6d6a0\" > >"}
	{"level":"warn","ts":"2024-08-03T22:52:03.011267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.967202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85462"}
	{"level":"info","ts":"2024-08-03T22:52:03.011343Z","caller":"traceutil/trace.go:171","msg":"trace[539765651] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1057; }","duration":"354.070685ms","start":"2024-08-03T22:52:02.657263Z","end":"2024-08-03T22:52:03.011334Z","steps":["trace[539765651] 'agreement among raft nodes before linearized reading'  (duration: 353.785371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T22:52:03.011385Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T22:52:02.65725Z","time spent":"354.127824ms","remote":"127.0.0.1:54526","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85486,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-08-03T22:52:03.016886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.723818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11155"}
	{"level":"info","ts":"2024-08-03T22:52:03.016927Z","caller":"traceutil/trace.go:171","msg":"trace[933517965] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1058; }","duration":"147.79941ms","start":"2024-08-03T22:52:02.869118Z","end":"2024-08-03T22:52:03.016917Z","steps":["trace[933517965] 'agreement among raft nodes before linearized reading'  (duration: 147.621881ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:52:03.017221Z","caller":"traceutil/trace.go:171","msg":"trace[1748400383] transaction","detail":"{read_only:false; response_revision:1058; number_of_response:1; }","duration":"207.568273ms","start":"2024-08-03T22:52:02.809643Z","end":"2024-08-03T22:52:03.017212Z","steps":["trace[1748400383] 'process raft request'  (duration: 207.028054ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:52:41.340707Z","caller":"traceutil/trace.go:171","msg":"trace[1642624007] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"111.909244ms","start":"2024-08-03T22:52:41.228782Z","end":"2024-08-03T22:52:41.340691Z","steps":["trace[1642624007] 'process raft request'  (duration: 111.793851ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:53:17.784235Z","caller":"traceutil/trace.go:171","msg":"trace[1355242074] linearizableReadLoop","detail":"{readStateIndex:1322; appliedIndex:1321; }","duration":"187.656328ms","start":"2024-08-03T22:53:17.596545Z","end":"2024-08-03T22:53:17.784201Z","steps":["trace[1355242074] 'read index received'  (duration: 183.972973ms)","trace[1355242074] 'applied index is now lower than readState.Index'  (duration: 3.682069ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-03T22:53:17.784638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.0156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-08-03T22:53:17.784828Z","caller":"traceutil/trace.go:171","msg":"trace[499869429] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1273; }","duration":"188.248811ms","start":"2024-08-03T22:53:17.596506Z","end":"2024-08-03T22:53:17.784754Z","steps":["trace[499869429] 'agreement among raft nodes before linearized reading'  (duration: 187.75768ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:12.824257Z","caller":"traceutil/trace.go:171","msg":"trace[1832367938] transaction","detail":"{read_only:false; response_revision:1640; number_of_response:1; }","duration":"165.337953ms","start":"2024-08-03T22:54:12.658854Z","end":"2024-08-03T22:54:12.824192Z","steps":["trace[1832367938] 'process raft request'  (duration: 165.233428ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:18.287859Z","caller":"traceutil/trace.go:171","msg":"trace[450811793] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"129.598611ms","start":"2024-08-03T22:54:18.158244Z","end":"2024-08-03T22:54:18.287843Z","steps":["trace[450811793] 'process raft request'  (duration: 129.276389ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:25.053784Z","caller":"traceutil/trace.go:171","msg":"trace[871754638] transaction","detail":"{read_only:false; response_revision:1686; number_of_response:1; }","duration":"167.61343ms","start":"2024-08-03T22:54:24.886155Z","end":"2024-08-03T22:54:25.053769Z","steps":["trace[871754638] 'process raft request'  (duration: 167.496357ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T22:54:52.183009Z","caller":"traceutil/trace.go:171","msg":"trace[2089842323] transaction","detail":"{read_only:false; response_revision:1869; number_of_response:1; }","duration":"175.992276ms","start":"2024-08-03T22:54:52.006989Z","end":"2024-08-03T22:54:52.182982Z","steps":["trace[2089842323] 'process raft request'  (duration: 175.545582ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:59:25 up 9 min,  0 users,  load average: 0.08, 0.47, 0.38
	Linux addons-110246 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f6c7d5bcf5b65c4346bf3a483804db253b3bd75a46f8b4de9b1b457ff70397d1] <==
	E0803 22:52:52.565202       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.55.238:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.55.238:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.55.238:443: connect: connection refused
	I0803 22:52:52.641935       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0803 22:53:37.766073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:45212: use of closed network connection
	E0803 22:53:37.946678       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:45228: use of closed network connection
	I0803 22:54:05.044158       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.232.213"}
	E0803 22:54:16.056152       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0803 22:54:25.691634       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0803 22:54:36.155453       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.9:8443->10.244.0.30:57560: read: connection reset by peer
	I0803 22:54:43.874785       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0803 22:54:44.913009       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0803 22:54:49.389381       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0803 22:54:49.566910       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.132.199"}
	I0803 22:55:05.504042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.504177       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.544256       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.544558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.546893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.547266       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.554953       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0803 22:55:05.555046       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0803 22:55:05.605818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	W0803 22:55:06.548208       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0803 22:55:06.607391       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0803 22:55:06.607391       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0803 22:57:12.099821       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.52.191"}
	
	
	==> kube-controller-manager [3dceba7bfaac39df8f29b99d4d543c47a7100131a23ed92feacdfcaf2ef7efd2] <==
	I0803 22:57:14.244899       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0803 22:57:14.247607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="11.391µs"
	I0803 22:57:14.254432       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0803 22:57:15.617185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="7.442491ms"
	I0803 22:57:15.617668       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.845µs"
	I0803 22:57:24.295787       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0803 22:57:43.986584       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:57:43.986765       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:57:51.050239       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:57:51.050374       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:57:54.088693       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:57:54.088787       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:58:08.702189       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:58:08.702341       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:58:15.574275       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:58:15.574362       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:58:34.833362       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:58:34.833550       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:58:43.289734       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:58:43.289871       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:58:44.837676       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:58:44.837706       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0803 22:59:11.680903       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0803 22:59:11.681125       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0803 22:59:24.300451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="10.757µs"
	
	
	==> kube-proxy [314af48a2afb05bd8ffa1c1fb970955f6d2a8456e4994365714c716f65ea906f] <==
	I0803 22:50:48.677270       1 server_linux.go:69] "Using iptables proxy"
	I0803 22:50:48.703273       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.9"]
	I0803 22:50:48.787529       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 22:50:48.787578       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 22:50:48.787598       1 server_linux.go:165] "Using iptables Proxier"
	I0803 22:50:48.790584       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 22:50:48.790854       1 server.go:872] "Version info" version="v1.30.3"
	I0803 22:50:48.790866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 22:50:48.792105       1 config.go:192] "Starting service config controller"
	I0803 22:50:48.792119       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 22:50:48.792145       1 config.go:101] "Starting endpoint slice config controller"
	I0803 22:50:48.792149       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 22:50:48.793106       1 config.go:319] "Starting node config controller"
	I0803 22:50:48.793114       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 22:50:48.894466       1 shared_informer.go:320] Caches are synced for service config
	I0803 22:50:48.894511       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 22:50:48.894478       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4cc2277cc91fde6d12da0df934beae4bfebc8572f161104e46027ea35c834717] <==
	W0803 22:50:30.514587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 22:50:30.517514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 22:50:31.347553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 22:50:31.347668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 22:50:31.378094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 22:50:31.378192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 22:50:31.400194       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 22:50:31.400241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 22:50:31.589990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 22:50:31.590037       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 22:50:31.631617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 22:50:31.631820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 22:50:31.646583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 22:50:31.646632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 22:50:31.668007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 22:50:31.668050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 22:50:31.696226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 22:50:31.696273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 22:50:31.727840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 22:50:31.729465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 22:50:31.739966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 22:50:31.740095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 22:50:31.795193       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 22:50:31.795242       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 22:50:34.496493       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.607501    1273 scope.go:117] "RemoveContainer" containerID="f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.629536    1273 scope.go:117] "RemoveContainer" containerID="f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"
	Aug 03 22:57:17 addons-110246 kubelet[1273]: E0803 22:57:17.630086    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b\": container with ID starting with f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b not found: ID does not exist" containerID="f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"
	Aug 03 22:57:17 addons-110246 kubelet[1273]: I0803 22:57:17.630112    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b"} err="failed to get container status \"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b\": rpc error: code = NotFound desc = could not find container \"f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b\": container with ID starting with f9352f5b9121fc7b2a692080c7edd30d9186ff1f18e4d28fda8aee1cbe52bd5b not found: ID does not exist"
	Aug 03 22:57:19 addons-110246 kubelet[1273]: I0803 22:57:19.081224    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07998a9-82f2-4874-ba2d-223974e5a260" path="/var/lib/kubelet/pods/c07998a9-82f2-4874-ba2d-223974e5a260/volumes"
	Aug 03 22:57:33 addons-110246 kubelet[1273]: E0803 22:57:33.092209    1273 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 22:57:33 addons-110246 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 22:57:33 addons-110246 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 22:57:33 addons-110246 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 22:57:33 addons-110246 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 22:57:36 addons-110246 kubelet[1273]: I0803 22:57:36.163210    1273 scope.go:117] "RemoveContainer" containerID="eaa60d007cf2158142a7ae364be1960f02a133016e8d49b4058e25861a8867ba"
	Aug 03 22:57:36 addons-110246 kubelet[1273]: I0803 22:57:36.179835    1273 scope.go:117] "RemoveContainer" containerID="a4cf51b1937c5bc0ef73d7b91c598c3233e9893b16bd8309528315f7a47f4b3d"
	Aug 03 22:58:06 addons-110246 kubelet[1273]: I0803 22:58:06.077860    1273 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:58:33 addons-110246 kubelet[1273]: E0803 22:58:33.093076    1273 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 22:58:33 addons-110246 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 22:58:33 addons-110246 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 22:58:33 addons-110246 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 22:58:33 addons-110246 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 22:59:24 addons-110246 kubelet[1273]: I0803 22:59:24.330846    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-ssxwk" podStartSLOduration=130.843212012 podStartE2EDuration="2m13.330801275s" podCreationTimestamp="2024-08-03 22:57:11 +0000 UTC" firstStartedPulling="2024-08-03 22:57:12.517929334 +0000 UTC m=+399.549466080" lastFinishedPulling="2024-08-03 22:57:15.005518595 +0000 UTC m=+402.037055343" observedRunningTime="2024-08-03 22:57:15.609289748 +0000 UTC m=+402.640826514" watchObservedRunningTime="2024-08-03 22:59:24.330801275 +0000 UTC m=+531.362338042"
	Aug 03 22:59:25 addons-110246 kubelet[1273]: I0803 22:59:25.745518    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bb904756-9056-4069-b53b-b35f8c0bde90-tmp-dir\") pod \"bb904756-9056-4069-b53b-b35f8c0bde90\" (UID: \"bb904756-9056-4069-b53b-b35f8c0bde90\") "
	Aug 03 22:59:25 addons-110246 kubelet[1273]: I0803 22:59:25.745572    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnc52\" (UniqueName: \"kubernetes.io/projected/bb904756-9056-4069-b53b-b35f8c0bde90-kube-api-access-lnc52\") pod \"bb904756-9056-4069-b53b-b35f8c0bde90\" (UID: \"bb904756-9056-4069-b53b-b35f8c0bde90\") "
	Aug 03 22:59:25 addons-110246 kubelet[1273]: I0803 22:59:25.746134    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb904756-9056-4069-b53b-b35f8c0bde90-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "bb904756-9056-4069-b53b-b35f8c0bde90" (UID: "bb904756-9056-4069-b53b-b35f8c0bde90"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 03 22:59:25 addons-110246 kubelet[1273]: I0803 22:59:25.749978    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb904756-9056-4069-b53b-b35f8c0bde90-kube-api-access-lnc52" (OuterVolumeSpecName: "kube-api-access-lnc52") pod "bb904756-9056-4069-b53b-b35f8c0bde90" (UID: "bb904756-9056-4069-b53b-b35f8c0bde90"). InnerVolumeSpecName "kube-api-access-lnc52". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 03 22:59:25 addons-110246 kubelet[1273]: I0803 22:59:25.846896    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lnc52\" (UniqueName: \"kubernetes.io/projected/bb904756-9056-4069-b53b-b35f8c0bde90-kube-api-access-lnc52\") on node \"addons-110246\" DevicePath \"\""
	Aug 03 22:59:25 addons-110246 kubelet[1273]: I0803 22:59:25.846952    1273 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bb904756-9056-4069-b53b-b35f8c0bde90-tmp-dir\") on node \"addons-110246\" DevicePath \"\""
	
	
	==> storage-provisioner [cca6528238e5e51859a0c676bd684cca55eece8b443052df4eeebde188634715] <==
	I0803 22:50:53.972833       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0803 22:50:54.034710       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0803 22:50:54.034838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0803 22:50:54.054681       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0803 22:50:54.054896       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-110246_29ffda3c-5de8-4822-b2d3-50fd51ed22cc!
	I0803 22:50:54.055288       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14ab441a-6864-49e0-8517-a57d647f6b8a", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-110246_29ffda3c-5de8-4822-b2d3-50fd51ed22cc became leader
	I0803 22:50:54.155700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-110246_29ffda3c-5de8-4822-b2d3-50fd51ed22cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-110246 -n addons-110246
helpers_test.go:261: (dbg) Run:  kubectl --context addons-110246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (322.54s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.42s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-110246
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-110246: exit status 82 (2m0.465102566s)

                                                
                                                
-- stdout --
	* Stopping node "addons-110246"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-110246" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-110246
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-110246: exit status 11 (21.668270553s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-110246" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-110246
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-110246: exit status 11 (6.144932796s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-110246" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-110246
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-110246: exit status 11 (6.142411844s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-110246" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.42s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 node stop m02 -v=7 --alsologtostderr
E0803 23:13:27.616889   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:13:41.850942   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.479650005s)

                                                
                                                
-- stdout --
	* Stopping node "ha-076508-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:12:43.766750   32520 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:12:43.767002   32520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:12:43.767011   32520 out.go:304] Setting ErrFile to fd 2...
	I0803 23:12:43.767015   32520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:12:43.767266   32520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:12:43.767522   32520 mustload.go:65] Loading cluster: ha-076508
	I0803 23:12:43.767947   32520 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:12:43.767972   32520 stop.go:39] StopHost: ha-076508-m02
	I0803 23:12:43.768455   32520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:12:43.768497   32520 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:12:43.784857   32520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I0803 23:12:43.785525   32520 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:12:43.786213   32520 main.go:141] libmachine: Using API Version  1
	I0803 23:12:43.786235   32520 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:12:43.786619   32520 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:12:43.788914   32520 out.go:177] * Stopping node "ha-076508-m02"  ...
	I0803 23:12:43.790100   32520 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0803 23:12:43.790149   32520 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:12:43.790396   32520 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0803 23:12:43.790430   32520 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:12:43.793513   32520 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:12:43.793902   32520 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:12:43.793935   32520 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:12:43.794069   32520 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:12:43.794255   32520 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:12:43.794429   32520 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:12:43.794596   32520 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:12:43.881566   32520 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0803 23:12:43.935676   32520 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0803 23:12:43.991196   32520 main.go:141] libmachine: Stopping "ha-076508-m02"...
	I0803 23:12:43.991246   32520 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:12:43.992657   32520 main.go:141] libmachine: (ha-076508-m02) Calling .Stop
	I0803 23:12:43.996363   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 0/120
	I0803 23:12:44.997891   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 1/120
	I0803 23:12:45.999719   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 2/120
	I0803 23:12:47.000988   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 3/120
	I0803 23:12:48.002814   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 4/120
	I0803 23:12:49.005244   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 5/120
	I0803 23:12:50.006725   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 6/120
	I0803 23:12:51.008291   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 7/120
	I0803 23:12:52.010411   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 8/120
	I0803 23:12:53.011871   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 9/120
	I0803 23:12:54.013929   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 10/120
	I0803 23:12:55.015858   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 11/120
	I0803 23:12:56.017319   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 12/120
	I0803 23:12:57.018807   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 13/120
	I0803 23:12:58.020380   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 14/120
	I0803 23:12:59.022835   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 15/120
	I0803 23:13:00.024339   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 16/120
	I0803 23:13:01.026680   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 17/120
	I0803 23:13:02.027886   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 18/120
	I0803 23:13:03.029278   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 19/120
	I0803 23:13:04.031033   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 20/120
	I0803 23:13:05.032403   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 21/120
	I0803 23:13:06.034566   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 22/120
	I0803 23:13:07.036130   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 23/120
	I0803 23:13:08.037733   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 24/120
	I0803 23:13:09.039757   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 25/120
	I0803 23:13:10.041795   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 26/120
	I0803 23:13:11.043244   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 27/120
	I0803 23:13:12.044849   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 28/120
	I0803 23:13:13.046708   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 29/120
	I0803 23:13:14.049254   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 30/120
	I0803 23:13:15.051440   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 31/120
	I0803 23:13:16.053825   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 32/120
	I0803 23:13:17.056035   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 33/120
	I0803 23:13:18.057722   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 34/120
	I0803 23:13:19.059997   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 35/120
	I0803 23:13:20.061423   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 36/120
	I0803 23:13:21.063300   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 37/120
	I0803 23:13:22.064766   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 38/120
	I0803 23:13:23.066080   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 39/120
	I0803 23:13:24.068144   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 40/120
	I0803 23:13:25.069544   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 41/120
	I0803 23:13:26.071802   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 42/120
	I0803 23:13:27.072985   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 43/120
	I0803 23:13:28.074503   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 44/120
	I0803 23:13:29.076608   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 45/120
	I0803 23:13:30.077757   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 46/120
	I0803 23:13:31.079725   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 47/120
	I0803 23:13:32.081211   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 48/120
	I0803 23:13:33.083009   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 49/120
	I0803 23:13:34.085580   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 50/120
	I0803 23:13:35.088108   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 51/120
	I0803 23:13:36.089641   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 52/120
	I0803 23:13:37.091909   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 53/120
	I0803 23:13:38.094309   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 54/120
	I0803 23:13:39.096665   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 55/120
	I0803 23:13:40.098080   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 56/120
	I0803 23:13:41.099954   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 57/120
	I0803 23:13:42.101316   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 58/120
	I0803 23:13:43.103170   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 59/120
	I0803 23:13:44.105574   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 60/120
	I0803 23:13:45.107886   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 61/120
	I0803 23:13:46.109226   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 62/120
	I0803 23:13:47.110730   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 63/120
	I0803 23:13:48.112089   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 64/120
	I0803 23:13:49.113859   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 65/120
	I0803 23:13:50.115734   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 66/120
	I0803 23:13:51.118115   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 67/120
	I0803 23:13:52.119738   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 68/120
	I0803 23:13:53.121054   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 69/120
	I0803 23:13:54.123325   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 70/120
	I0803 23:13:55.124886   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 71/120
	I0803 23:13:56.126184   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 72/120
	I0803 23:13:57.127873   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 73/120
	I0803 23:13:58.129698   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 74/120
	I0803 23:13:59.131598   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 75/120
	I0803 23:14:00.133703   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 76/120
	I0803 23:14:01.135069   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 77/120
	I0803 23:14:02.136340   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 78/120
	I0803 23:14:03.137760   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 79/120
	I0803 23:14:04.139893   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 80/120
	I0803 23:14:05.141477   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 81/120
	I0803 23:14:06.143007   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 82/120
	I0803 23:14:07.144665   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 83/120
	I0803 23:14:08.146074   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 84/120
	I0803 23:14:09.147937   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 85/120
	I0803 23:14:10.149844   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 86/120
	I0803 23:14:11.151343   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 87/120
	I0803 23:14:12.152820   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 88/120
	I0803 23:14:13.154163   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 89/120
	I0803 23:14:14.156447   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 90/120
	I0803 23:14:15.158023   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 91/120
	I0803 23:14:16.159550   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 92/120
	I0803 23:14:17.161444   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 93/120
	I0803 23:14:18.163115   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 94/120
	I0803 23:14:19.164722   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 95/120
	I0803 23:14:20.166553   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 96/120
	I0803 23:14:21.168076   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 97/120
	I0803 23:14:22.170097   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 98/120
	I0803 23:14:23.171774   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 99/120
	I0803 23:14:24.173674   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 100/120
	I0803 23:14:25.174900   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 101/120
	I0803 23:14:26.176449   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 102/120
	I0803 23:14:27.177700   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 103/120
	I0803 23:14:28.180201   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 104/120
	I0803 23:14:29.181992   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 105/120
	I0803 23:14:30.183334   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 106/120
	I0803 23:14:31.184621   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 107/120
	I0803 23:14:32.186570   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 108/120
	I0803 23:14:33.188247   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 109/120
	I0803 23:14:34.190472   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 110/120
	I0803 23:14:35.191681   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 111/120
	I0803 23:14:36.193222   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 112/120
	I0803 23:14:37.194470   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 113/120
	I0803 23:14:38.195892   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 114/120
	I0803 23:14:39.197384   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 115/120
	I0803 23:14:40.198739   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 116/120
	I0803 23:14:41.200217   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 117/120
	I0803 23:14:42.201792   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 118/120
	I0803 23:14:43.203232   32520 main.go:141] libmachine: (ha-076508-m02) Waiting for machine to stop 119/120
	I0803 23:14:44.203919   32520 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0803 23:14:44.204077   32520 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-076508 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (19.24131034s)

                                                
                                                
-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:14:44.247682   32971 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:14:44.247790   32971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:14:44.247802   32971 out.go:304] Setting ErrFile to fd 2...
	I0803 23:14:44.247808   32971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:14:44.248014   32971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:14:44.248218   32971 out.go:298] Setting JSON to false
	I0803 23:14:44.248246   32971 mustload.go:65] Loading cluster: ha-076508
	I0803 23:14:44.248345   32971 notify.go:220] Checking for updates...
	I0803 23:14:44.248697   32971 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:14:44.248715   32971 status.go:255] checking status of ha-076508 ...
	I0803 23:14:44.249172   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:14:44.249226   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:14:44.264312   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0803 23:14:44.264790   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:14:44.265464   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:14:44.265490   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:14:44.265906   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:14:44.266131   32971 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:14:44.268029   32971 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:14:44.268062   32971 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:14:44.268365   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:14:44.268403   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:14:44.283154   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45295
	I0803 23:14:44.283663   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:14:44.284182   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:14:44.284229   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:14:44.284563   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:14:44.284772   32971 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:14:44.287620   32971 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:14:44.288093   32971 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:14:44.288122   32971 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:14:44.288204   32971 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:14:44.288617   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:14:44.288657   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:14:44.305259   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0803 23:14:44.305719   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:14:44.306190   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:14:44.306211   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:14:44.306501   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:14:44.306687   32971 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:14:44.306878   32971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:14:44.306916   32971 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:14:44.309752   32971 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:14:44.310194   32971 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:14:44.310215   32971 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:14:44.310339   32971 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:14:44.310507   32971 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:14:44.310672   32971 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:14:44.310797   32971 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:14:44.399689   32971 ssh_runner.go:195] Run: systemctl --version
	I0803 23:14:44.406862   32971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:14:44.424768   32971 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:14:44.424794   32971 api_server.go:166] Checking apiserver status ...
	I0803 23:14:44.424838   32971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:14:44.441339   32971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:14:44.451376   32971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:14:44.451444   32971 ssh_runner.go:195] Run: ls
	I0803 23:14:44.455999   32971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:14:44.460174   32971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:14:44.460193   32971 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:14:44.460202   32971 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:14:44.460220   32971 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:14:44.460487   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:14:44.460518   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:14:44.475458   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0803 23:14:44.475904   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:14:44.476382   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:14:44.476404   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:14:44.476733   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:14:44.476910   32971 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:14:44.478515   32971 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:14:44.478542   32971 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:14:44.478820   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:14:44.478855   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:14:44.495505   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38923
	I0803 23:14:44.495959   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:14:44.496441   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:14:44.496460   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:14:44.496782   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:14:44.496992   32971 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:14:44.500580   32971 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:14:44.501044   32971 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:14:44.501068   32971 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:14:44.501309   32971 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:14:44.501623   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:14:44.501662   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:14:44.517207   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41447
	I0803 23:14:44.517717   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:14:44.518226   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:14:44.518246   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:14:44.518577   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:14:44.518771   32971 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:14:44.518969   32971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:14:44.518997   32971 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:14:44.521605   32971 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:14:44.522057   32971 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:14:44.522081   32971 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:14:44.522242   32971 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:14:44.522410   32971 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:14:44.522567   32971 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:14:44.522724   32971 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	W0803 23:15:03.073631   32971 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:03.073738   32971 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0803 23:15:03.073753   32971 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:03.073761   32971 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:15:03.073777   32971 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:03.073783   32971 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:03.074233   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:03.074275   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:03.089558   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0803 23:15:03.090136   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:03.090618   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:15:03.090645   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:03.090957   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:03.091187   32971 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:03.092853   32971 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:03.092870   32971 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:03.093188   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:03.093283   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:03.108316   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0803 23:15:03.108873   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:03.109330   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:15:03.109363   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:03.109715   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:03.109938   32971 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:03.113204   32971 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:03.113609   32971 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:03.113636   32971 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:03.113782   32971 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:03.114098   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:03.114142   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:03.131429   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0803 23:15:03.131971   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:03.132538   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:15:03.132569   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:03.132922   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:03.133168   32971 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:03.133422   32971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:03.133449   32971 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:03.136612   32971 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:03.137109   32971 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:03.137129   32971 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:03.137288   32971 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:03.137473   32971 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:03.137753   32971 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:03.137909   32971 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:03.220459   32971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:03.238289   32971 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:03.238316   32971 api_server.go:166] Checking apiserver status ...
	I0803 23:15:03.238355   32971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:03.253532   32971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:03.263105   32971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:03.263179   32971 ssh_runner.go:195] Run: ls
	I0803 23:15:03.268580   32971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:03.275618   32971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:03.275654   32971 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:03.275665   32971 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:03.275682   32971 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:03.276029   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:03.276071   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:03.292628   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0803 23:15:03.293066   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:03.293550   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:15:03.293577   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:03.293921   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:03.294169   32971 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:03.296158   32971 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:03.296176   32971 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:03.296452   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:03.296493   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:03.311497   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0803 23:15:03.311921   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:03.312381   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:15:03.312401   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:03.312685   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:03.312929   32971 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:03.316119   32971 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:03.316724   32971 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:03.316753   32971 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:03.316914   32971 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:03.317221   32971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:03.317267   32971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:03.332804   32971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0803 23:15:03.333272   32971 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:03.333755   32971 main.go:141] libmachine: Using API Version  1
	I0803 23:15:03.333777   32971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:03.334110   32971 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:03.334302   32971 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:03.334482   32971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:03.334501   32971 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:03.337635   32971 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:03.338255   32971 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:03.338298   32971 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:03.338484   32971 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:03.338640   32971 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:03.338891   32971 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:03.339067   32971 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:03.426557   32971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:03.445000   32971 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076508 -n ha-076508
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076508 logs -n 25: (1.495324632s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508:/home/docker/cp-test_ha-076508-m03_ha-076508.txt                      |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508 sudo cat                                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508.txt                                |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m02:/home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m04 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp testdata/cp-test.txt                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508:/home/docker/cp-test_ha-076508-m04_ha-076508.txt                      |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508 sudo cat                                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508.txt                                |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m02:/home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03:/home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m03 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-076508 node stop m02 -v=7                                                    | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:06:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:06:47.489970   28167 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:06:47.490222   28167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:47.490230   28167 out.go:304] Setting ErrFile to fd 2...
	I0803 23:06:47.490240   28167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:47.490404   28167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:06:47.490927   28167 out.go:298] Setting JSON to false
	I0803 23:06:47.491735   28167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2951,"bootTime":1722723456,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:06:47.491789   28167 start.go:139] virtualization: kvm guest
	I0803 23:06:47.494029   28167 out.go:177] * [ha-076508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:06:47.495467   28167 notify.go:220] Checking for updates...
	I0803 23:06:47.495541   28167 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:06:47.497026   28167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:06:47.498858   28167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:06:47.500281   28167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:47.501865   28167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:06:47.503382   28167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:06:47.504936   28167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:06:47.540276   28167 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 23:06:47.541636   28167 start.go:297] selected driver: kvm2
	I0803 23:06:47.541650   28167 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:06:47.541665   28167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:06:47.542627   28167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:06:47.542715   28167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:06:47.557706   28167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:06:47.557763   28167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:06:47.558059   28167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:06:47.558133   28167 cni.go:84] Creating CNI manager for ""
	I0803 23:06:47.558145   28167 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0803 23:06:47.558159   28167 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 23:06:47.558221   28167 start.go:340] cluster config:
	{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0803 23:06:47.558344   28167 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:06:47.560165   28167 out.go:177] * Starting "ha-076508" primary control-plane node in "ha-076508" cluster
	I0803 23:06:47.561417   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:06:47.561457   28167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:06:47.561465   28167 cache.go:56] Caching tarball of preloaded images
	I0803 23:06:47.561558   28167 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:06:47.561573   28167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:06:47.561866   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:06:47.561887   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json: {Name:mke12aaae1c6c743b80b12da59b5b860742452dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:06:47.562034   28167 start.go:360] acquireMachinesLock for ha-076508: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:06:47.562069   28167 start.go:364] duration metric: took 19.4µs to acquireMachinesLock for "ha-076508"
	I0803 23:06:47.562091   28167 start.go:93] Provisioning new machine with config: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:06:47.562165   28167 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 23:06:47.563789   28167 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:06:47.563905   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:06:47.563951   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:06:47.578194   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0803 23:06:47.578649   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:06:47.579128   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:06:47.579147   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:06:47.579513   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:06:47.579672   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:06:47.579781   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:06:47.579969   28167 start.go:159] libmachine.API.Create for "ha-076508" (driver="kvm2")
	I0803 23:06:47.580000   28167 client.go:168] LocalClient.Create starting
	I0803 23:06:47.580039   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 23:06:47.580071   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:06:47.580086   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:06:47.580153   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 23:06:47.580172   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:06:47.580185   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:06:47.580198   28167 main.go:141] libmachine: Running pre-create checks...
	I0803 23:06:47.580210   28167 main.go:141] libmachine: (ha-076508) Calling .PreCreateCheck
	I0803 23:06:47.580557   28167 main.go:141] libmachine: (ha-076508) Calling .GetConfigRaw
	I0803 23:06:47.580958   28167 main.go:141] libmachine: Creating machine...
	I0803 23:06:47.580971   28167 main.go:141] libmachine: (ha-076508) Calling .Create
	I0803 23:06:47.581080   28167 main.go:141] libmachine: (ha-076508) Creating KVM machine...
	I0803 23:06:47.582143   28167 main.go:141] libmachine: (ha-076508) DBG | found existing default KVM network
	I0803 23:06:47.582776   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.582645   28190 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0803 23:06:47.582798   28167 main.go:141] libmachine: (ha-076508) DBG | created network xml: 
	I0803 23:06:47.582816   28167 main.go:141] libmachine: (ha-076508) DBG | <network>
	I0803 23:06:47.582832   28167 main.go:141] libmachine: (ha-076508) DBG |   <name>mk-ha-076508</name>
	I0803 23:06:47.582843   28167 main.go:141] libmachine: (ha-076508) DBG |   <dns enable='no'/>
	I0803 23:06:47.582852   28167 main.go:141] libmachine: (ha-076508) DBG |   
	I0803 23:06:47.582858   28167 main.go:141] libmachine: (ha-076508) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0803 23:06:47.582865   28167 main.go:141] libmachine: (ha-076508) DBG |     <dhcp>
	I0803 23:06:47.582871   28167 main.go:141] libmachine: (ha-076508) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0803 23:06:47.582878   28167 main.go:141] libmachine: (ha-076508) DBG |     </dhcp>
	I0803 23:06:47.582884   28167 main.go:141] libmachine: (ha-076508) DBG |   </ip>
	I0803 23:06:47.582888   28167 main.go:141] libmachine: (ha-076508) DBG |   
	I0803 23:06:47.582894   28167 main.go:141] libmachine: (ha-076508) DBG | </network>
	I0803 23:06:47.582900   28167 main.go:141] libmachine: (ha-076508) DBG | 
	I0803 23:06:47.587879   28167 main.go:141] libmachine: (ha-076508) DBG | trying to create private KVM network mk-ha-076508 192.168.39.0/24...
	I0803 23:06:47.651816   28167 main.go:141] libmachine: (ha-076508) DBG | private KVM network mk-ha-076508 192.168.39.0/24 created
	I0803 23:06:47.651871   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.651776   28190 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:47.651884   28167 main.go:141] libmachine: (ha-076508) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508 ...
	I0803 23:06:47.651905   28167 main.go:141] libmachine: (ha-076508) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:06:47.651921   28167 main.go:141] libmachine: (ha-076508) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:06:47.895582   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.895470   28190 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa...
	I0803 23:06:47.984578   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.984431   28190 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/ha-076508.rawdisk...
	I0803 23:06:47.984607   28167 main.go:141] libmachine: (ha-076508) DBG | Writing magic tar header
	I0803 23:06:47.984622   28167 main.go:141] libmachine: (ha-076508) DBG | Writing SSH key tar header
	I0803 23:06:47.984667   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.984541   28190 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508 ...
	I0803 23:06:47.984680   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508
	I0803 23:06:47.984697   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508 (perms=drwx------)
	I0803 23:06:47.984714   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:06:47.984737   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 23:06:47.984750   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 23:06:47.984759   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:47.984765   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 23:06:47.984774   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:06:47.984786   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:06:47.984799   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home
	I0803 23:06:47.984811   28167 main.go:141] libmachine: (ha-076508) DBG | Skipping /home - not owner
	I0803 23:06:47.984829   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 23:06:47.984848   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:06:47.984859   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:06:47.984870   28167 main.go:141] libmachine: (ha-076508) Creating domain...
	I0803 23:06:47.985932   28167 main.go:141] libmachine: (ha-076508) define libvirt domain using xml: 
	I0803 23:06:47.985954   28167 main.go:141] libmachine: (ha-076508) <domain type='kvm'>
	I0803 23:06:47.985976   28167 main.go:141] libmachine: (ha-076508)   <name>ha-076508</name>
	I0803 23:06:47.985990   28167 main.go:141] libmachine: (ha-076508)   <memory unit='MiB'>2200</memory>
	I0803 23:06:47.986002   28167 main.go:141] libmachine: (ha-076508)   <vcpu>2</vcpu>
	I0803 23:06:47.986012   28167 main.go:141] libmachine: (ha-076508)   <features>
	I0803 23:06:47.986026   28167 main.go:141] libmachine: (ha-076508)     <acpi/>
	I0803 23:06:47.986036   28167 main.go:141] libmachine: (ha-076508)     <apic/>
	I0803 23:06:47.986062   28167 main.go:141] libmachine: (ha-076508)     <pae/>
	I0803 23:06:47.986081   28167 main.go:141] libmachine: (ha-076508)     
	I0803 23:06:47.986088   28167 main.go:141] libmachine: (ha-076508)   </features>
	I0803 23:06:47.986105   28167 main.go:141] libmachine: (ha-076508)   <cpu mode='host-passthrough'>
	I0803 23:06:47.986113   28167 main.go:141] libmachine: (ha-076508)   
	I0803 23:06:47.986117   28167 main.go:141] libmachine: (ha-076508)   </cpu>
	I0803 23:06:47.986124   28167 main.go:141] libmachine: (ha-076508)   <os>
	I0803 23:06:47.986129   28167 main.go:141] libmachine: (ha-076508)     <type>hvm</type>
	I0803 23:06:47.986136   28167 main.go:141] libmachine: (ha-076508)     <boot dev='cdrom'/>
	I0803 23:06:47.986142   28167 main.go:141] libmachine: (ha-076508)     <boot dev='hd'/>
	I0803 23:06:47.986148   28167 main.go:141] libmachine: (ha-076508)     <bootmenu enable='no'/>
	I0803 23:06:47.986156   28167 main.go:141] libmachine: (ha-076508)   </os>
	I0803 23:06:47.986175   28167 main.go:141] libmachine: (ha-076508)   <devices>
	I0803 23:06:47.986201   28167 main.go:141] libmachine: (ha-076508)     <disk type='file' device='cdrom'>
	I0803 23:06:47.986217   28167 main.go:141] libmachine: (ha-076508)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/boot2docker.iso'/>
	I0803 23:06:47.986233   28167 main.go:141] libmachine: (ha-076508)       <target dev='hdc' bus='scsi'/>
	I0803 23:06:47.986262   28167 main.go:141] libmachine: (ha-076508)       <readonly/>
	I0803 23:06:47.986280   28167 main.go:141] libmachine: (ha-076508)     </disk>
	I0803 23:06:47.986295   28167 main.go:141] libmachine: (ha-076508)     <disk type='file' device='disk'>
	I0803 23:06:47.986311   28167 main.go:141] libmachine: (ha-076508)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:06:47.986327   28167 main.go:141] libmachine: (ha-076508)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/ha-076508.rawdisk'/>
	I0803 23:06:47.986338   28167 main.go:141] libmachine: (ha-076508)       <target dev='hda' bus='virtio'/>
	I0803 23:06:47.986350   28167 main.go:141] libmachine: (ha-076508)     </disk>
	I0803 23:06:47.986360   28167 main.go:141] libmachine: (ha-076508)     <interface type='network'>
	I0803 23:06:47.986372   28167 main.go:141] libmachine: (ha-076508)       <source network='mk-ha-076508'/>
	I0803 23:06:47.986382   28167 main.go:141] libmachine: (ha-076508)       <model type='virtio'/>
	I0803 23:06:47.986392   28167 main.go:141] libmachine: (ha-076508)     </interface>
	I0803 23:06:47.986410   28167 main.go:141] libmachine: (ha-076508)     <interface type='network'>
	I0803 23:06:47.986426   28167 main.go:141] libmachine: (ha-076508)       <source network='default'/>
	I0803 23:06:47.986436   28167 main.go:141] libmachine: (ha-076508)       <model type='virtio'/>
	I0803 23:06:47.986443   28167 main.go:141] libmachine: (ha-076508)     </interface>
	I0803 23:06:47.986452   28167 main.go:141] libmachine: (ha-076508)     <serial type='pty'>
	I0803 23:06:47.986462   28167 main.go:141] libmachine: (ha-076508)       <target port='0'/>
	I0803 23:06:47.986474   28167 main.go:141] libmachine: (ha-076508)     </serial>
	I0803 23:06:47.986484   28167 main.go:141] libmachine: (ha-076508)     <console type='pty'>
	I0803 23:06:47.986507   28167 main.go:141] libmachine: (ha-076508)       <target type='serial' port='0'/>
	I0803 23:06:47.986526   28167 main.go:141] libmachine: (ha-076508)     </console>
	I0803 23:06:47.986536   28167 main.go:141] libmachine: (ha-076508)     <rng model='virtio'>
	I0803 23:06:47.986549   28167 main.go:141] libmachine: (ha-076508)       <backend model='random'>/dev/random</backend>
	I0803 23:06:47.986559   28167 main.go:141] libmachine: (ha-076508)     </rng>
	I0803 23:06:47.986566   28167 main.go:141] libmachine: (ha-076508)     
	I0803 23:06:47.986580   28167 main.go:141] libmachine: (ha-076508)     
	I0803 23:06:47.986591   28167 main.go:141] libmachine: (ha-076508)   </devices>
	I0803 23:06:47.986600   28167 main.go:141] libmachine: (ha-076508) </domain>
	I0803 23:06:47.986612   28167 main.go:141] libmachine: (ha-076508) 
	I0803 23:06:47.990359   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:ee:29:e0 in network default
	I0803 23:06:47.990927   28167 main.go:141] libmachine: (ha-076508) Ensuring networks are active...
	I0803 23:06:47.990950   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:47.991615   28167 main.go:141] libmachine: (ha-076508) Ensuring network default is active
	I0803 23:06:47.991947   28167 main.go:141] libmachine: (ha-076508) Ensuring network mk-ha-076508 is active
	I0803 23:06:47.992429   28167 main.go:141] libmachine: (ha-076508) Getting domain xml...
	I0803 23:06:47.993139   28167 main.go:141] libmachine: (ha-076508) Creating domain...
	I0803 23:06:49.172673   28167 main.go:141] libmachine: (ha-076508) Waiting to get IP...
	I0803 23:06:49.173616   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:49.174072   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:49.174094   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:49.174055   28190 retry.go:31] will retry after 299.048685ms: waiting for machine to come up
	I0803 23:06:49.474639   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:49.475036   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:49.475065   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:49.474985   28190 retry.go:31] will retry after 364.349968ms: waiting for machine to come up
	I0803 23:06:49.840548   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:49.841056   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:49.841086   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:49.841028   28190 retry.go:31] will retry after 363.489429ms: waiting for machine to come up
	I0803 23:06:50.206557   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:50.206963   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:50.206989   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:50.206887   28190 retry.go:31] will retry after 401.199995ms: waiting for machine to come up
	I0803 23:06:50.609300   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:50.609723   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:50.609756   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:50.609668   28190 retry.go:31] will retry after 523.568123ms: waiting for machine to come up
	I0803 23:06:51.134353   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:51.134834   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:51.134858   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:51.134771   28190 retry.go:31] will retry after 668.196356ms: waiting for machine to come up
	I0803 23:06:51.804536   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:51.804899   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:51.804938   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:51.804860   28190 retry.go:31] will retry after 746.059023ms: waiting for machine to come up
	I0803 23:06:52.552683   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:52.553161   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:52.553186   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:52.553111   28190 retry.go:31] will retry after 983.956736ms: waiting for machine to come up
	I0803 23:06:53.538479   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:53.538881   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:53.538901   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:53.538827   28190 retry.go:31] will retry after 1.575987073s: waiting for machine to come up
	I0803 23:06:55.116547   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:55.116933   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:55.116958   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:55.116890   28190 retry.go:31] will retry after 1.6753366s: waiting for machine to come up
	I0803 23:06:56.794713   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:56.795125   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:56.795151   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:56.795100   28190 retry.go:31] will retry after 1.978262602s: waiting for machine to come up
	I0803 23:06:58.775186   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:58.775682   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:58.775699   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:58.775638   28190 retry.go:31] will retry after 2.58504789s: waiting for machine to come up
	I0803 23:07:01.364479   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:01.364842   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:07:01.364866   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:07:01.364802   28190 retry.go:31] will retry after 3.09859595s: waiting for machine to come up
	I0803 23:07:04.465537   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:04.465910   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:07:04.465931   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:07:04.465871   28190 retry.go:31] will retry after 4.249791833s: waiting for machine to come up
	I0803 23:07:08.717607   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:08.718056   28167 main.go:141] libmachine: (ha-076508) Found IP for machine: 192.168.39.154
	I0803 23:07:08.718075   28167 main.go:141] libmachine: (ha-076508) Reserving static IP address...
	I0803 23:07:08.718088   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has current primary IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:08.718437   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find host DHCP lease matching {name: "ha-076508", mac: "52:54:00:04:c7:ad", ip: "192.168.39.154"} in network mk-ha-076508
	I0803 23:07:08.791835   28167 main.go:141] libmachine: (ha-076508) Reserved static IP address: 192.168.39.154
	I0803 23:07:08.791856   28167 main.go:141] libmachine: (ha-076508) Waiting for SSH to be available...
	I0803 23:07:08.791863   28167 main.go:141] libmachine: (ha-076508) DBG | Getting to WaitForSSH function...
	I0803 23:07:08.794443   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:08.794792   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508
	I0803 23:07:08.794816   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find defined IP address of network mk-ha-076508 interface with MAC address 52:54:00:04:c7:ad
	I0803 23:07:08.794991   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH client type: external
	I0803 23:07:08.795016   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa (-rw-------)
	I0803 23:07:08.795050   28167 main.go:141] libmachine: (ha-076508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:07:08.795057   28167 main.go:141] libmachine: (ha-076508) DBG | About to run SSH command:
	I0803 23:07:08.795066   28167 main.go:141] libmachine: (ha-076508) DBG | exit 0
	I0803 23:07:08.799217   28167 main.go:141] libmachine: (ha-076508) DBG | SSH cmd err, output: exit status 255: 
	I0803 23:07:08.799237   28167 main.go:141] libmachine: (ha-076508) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0803 23:07:08.799246   28167 main.go:141] libmachine: (ha-076508) DBG | command : exit 0
	I0803 23:07:08.799253   28167 main.go:141] libmachine: (ha-076508) DBG | err     : exit status 255
	I0803 23:07:08.799264   28167 main.go:141] libmachine: (ha-076508) DBG | output  : 
	I0803 23:07:11.801425   28167 main.go:141] libmachine: (ha-076508) DBG | Getting to WaitForSSH function...
	I0803 23:07:11.803779   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.804325   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:11.804371   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.804535   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH client type: external
	I0803 23:07:11.804565   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa (-rw-------)
	I0803 23:07:11.804586   28167 main.go:141] libmachine: (ha-076508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:07:11.804598   28167 main.go:141] libmachine: (ha-076508) DBG | About to run SSH command:
	I0803 23:07:11.804618   28167 main.go:141] libmachine: (ha-076508) DBG | exit 0
	I0803 23:07:11.933600   28167 main.go:141] libmachine: (ha-076508) DBG | SSH cmd err, output: <nil>: 
	I0803 23:07:11.933845   28167 main.go:141] libmachine: (ha-076508) KVM machine creation complete!
	I0803 23:07:11.934170   28167 main.go:141] libmachine: (ha-076508) Calling .GetConfigRaw
	I0803 23:07:11.934761   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:11.935003   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:11.935207   28167 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:07:11.935223   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:11.936615   28167 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:07:11.936629   28167 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:07:11.936634   28167 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:07:11.936640   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:11.939026   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.939414   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:11.939441   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.939597   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:11.939771   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:11.939942   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:11.940107   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:11.940274   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:11.940529   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:11.940546   28167 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:07:12.049051   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:07:12.049076   28167 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:07:12.049085   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.052089   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.052517   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.052539   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.052764   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.052954   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.053105   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.053271   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.053468   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.053682   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.053695   28167 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:07:12.162371   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:07:12.162443   28167 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:07:12.162453   28167 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:07:12.162462   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:07:12.162766   28167 buildroot.go:166] provisioning hostname "ha-076508"
	I0803 23:07:12.162795   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:07:12.163114   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.166049   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.166444   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.166475   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.166632   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.166805   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.166994   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.167126   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.167297   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.167478   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.167494   28167 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508 && echo "ha-076508" | sudo tee /etc/hostname
	I0803 23:07:12.292153   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508
	
	I0803 23:07:12.292176   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.295092   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.295463   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.295489   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.295638   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.295830   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.295976   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.296089   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.296243   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.296441   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.296458   28167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:07:12.414678   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:07:12.414705   28167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:07:12.414725   28167 buildroot.go:174] setting up certificates
	I0803 23:07:12.414737   28167 provision.go:84] configureAuth start
	I0803 23:07:12.414749   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:07:12.415054   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:12.417608   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.417930   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.417956   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.418066   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.420424   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.420899   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.420922   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.421045   28167 provision.go:143] copyHostCerts
	I0803 23:07:12.421075   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:07:12.421132   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:07:12.421142   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:07:12.421225   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:07:12.421365   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:07:12.421395   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:07:12.421405   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:07:12.421449   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:07:12.421617   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:07:12.421652   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:07:12.421661   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:07:12.421712   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:07:12.421792   28167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508 san=[127.0.0.1 192.168.39.154 ha-076508 localhost minikube]
	I0803 23:07:12.819787   28167 provision.go:177] copyRemoteCerts
	I0803 23:07:12.819849   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:07:12.819871   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.822738   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.823158   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.823190   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.823305   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.823489   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.823678   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.823831   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:12.907870   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:07:12.907938   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:07:12.932838   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:07:12.932923   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0803 23:07:12.957956   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:07:12.958024   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:07:12.983173   28167 provision.go:87] duration metric: took 568.422623ms to configureAuth
	I0803 23:07:12.983203   28167 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:07:12.983362   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:07:12.983432   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.985912   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.986294   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.986324   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.986487   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.986682   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.986874   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.986971   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.987122   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.987281   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.987297   28167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:07:13.258685   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:07:13.258721   28167 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:07:13.258732   28167 main.go:141] libmachine: (ha-076508) Calling .GetURL
	I0803 23:07:13.260040   28167 main.go:141] libmachine: (ha-076508) DBG | Using libvirt version 6000000
	I0803 23:07:13.262246   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.262620   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.262649   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.262820   28167 main.go:141] libmachine: Docker is up and running!
	I0803 23:07:13.262834   28167 main.go:141] libmachine: Reticulating splines...
	I0803 23:07:13.262841   28167 client.go:171] duration metric: took 25.682831089s to LocalClient.Create
	I0803 23:07:13.262862   28167 start.go:167] duration metric: took 25.682893298s to libmachine.API.Create "ha-076508"
	I0803 23:07:13.262870   28167 start.go:293] postStartSetup for "ha-076508" (driver="kvm2")
	I0803 23:07:13.262880   28167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:07:13.262896   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.263137   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:07:13.263159   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.265085   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.265469   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.265497   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.265630   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.265806   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.265943   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.266114   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:13.352825   28167 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:07:13.357277   28167 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:07:13.357300   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:07:13.357375   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:07:13.357448   28167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:07:13.357458   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:07:13.357542   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:07:13.368303   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:07:13.394754   28167 start.go:296] duration metric: took 131.872279ms for postStartSetup
	I0803 23:07:13.394801   28167 main.go:141] libmachine: (ha-076508) Calling .GetConfigRaw
	I0803 23:07:13.395357   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:13.397766   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.398067   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.398093   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.398287   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:07:13.398476   28167 start.go:128] duration metric: took 25.836297699s to createHost
	I0803 23:07:13.398499   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.400608   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.400865   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.400892   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.401050   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.401230   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.401394   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.401513   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.401651   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:13.401817   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:13.401834   28167 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 23:07:13.514455   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722726433.492512851
	
	I0803 23:07:13.514477   28167 fix.go:216] guest clock: 1722726433.492512851
	I0803 23:07:13.514485   28167 fix.go:229] Guest: 2024-08-03 23:07:13.492512851 +0000 UTC Remote: 2024-08-03 23:07:13.398488875 +0000 UTC m=+25.941429857 (delta=94.023976ms)
	I0803 23:07:13.514520   28167 fix.go:200] guest clock delta is within tolerance: 94.023976ms
	I0803 23:07:13.514527   28167 start.go:83] releasing machines lock for "ha-076508", held for 25.952446969s
	I0803 23:07:13.514543   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.514834   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:13.517401   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.517793   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.517815   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.517978   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.518494   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.518633   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.518709   28167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:07:13.518748   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.518833   28167 ssh_runner.go:195] Run: cat /version.json
	I0803 23:07:13.518855   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.521510   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.521708   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.521925   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.521948   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.522090   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.522110   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.522134   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.522304   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.522307   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.522472   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.522474   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.522662   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.522677   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:13.522810   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:13.620582   28167 ssh_runner.go:195] Run: systemctl --version
	I0803 23:07:13.626624   28167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:07:13.790848   28167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:07:13.796926   28167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:07:13.796988   28167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:07:13.814400   28167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:07:13.814425   28167 start.go:495] detecting cgroup driver to use...
	I0803 23:07:13.814481   28167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:07:13.831090   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:07:13.846834   28167 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:07:13.846891   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:07:13.862395   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:07:13.879388   28167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:07:14.014543   28167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:07:14.171743   28167 docker.go:233] disabling docker service ...
	I0803 23:07:14.171799   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:07:14.187004   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:07:14.200675   28167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:07:14.313247   28167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:07:14.422410   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:07:14.437475   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:07:14.457628   28167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:07:14.457699   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.469513   28167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:07:14.469645   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.482373   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.493984   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.505308   28167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:07:14.516663   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.528037   28167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.546046   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.557885   28167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:07:14.568691   28167 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:07:14.568744   28167 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:07:14.583280   28167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:07:14.593878   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:07:14.701783   28167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:07:14.855293   28167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:07:14.855386   28167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:07:14.861520   28167 start.go:563] Will wait 60s for crictl version
	I0803 23:07:14.861569   28167 ssh_runner.go:195] Run: which crictl
	I0803 23:07:14.865747   28167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:07:14.906262   28167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:07:14.906349   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:07:14.934547   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:07:14.964520   28167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:07:14.965845   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:14.968597   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:14.969165   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:14.969195   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:14.969466   28167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:07:14.973838   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:07:14.987577   28167 kubeadm.go:883] updating cluster {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:07:14.987669   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:07:14.987710   28167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:07:15.027512   28167 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0803 23:07:15.027595   28167 ssh_runner.go:195] Run: which lz4
	I0803 23:07:15.031844   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0803 23:07:15.031955   28167 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 23:07:15.036494   28167 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 23:07:15.036528   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0803 23:07:16.510128   28167 crio.go:462] duration metric: took 1.478209536s to copy over tarball
	I0803 23:07:16.510209   28167 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 23:07:18.736437   28167 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.226201481s)
	I0803 23:07:18.736463   28167 crio.go:469] duration metric: took 2.226302648s to extract the tarball
	I0803 23:07:18.736472   28167 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 23:07:18.775687   28167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:07:18.821770   28167 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:07:18.821797   28167 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:07:18.821807   28167 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.30.3 crio true true} ...
	I0803 23:07:18.821941   28167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:07:18.822014   28167 ssh_runner.go:195] Run: crio config
	I0803 23:07:18.867888   28167 cni.go:84] Creating CNI manager for ""
	I0803 23:07:18.867905   28167 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:07:18.867918   28167 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:07:18.867938   28167 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076508 NodeName:ha-076508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:07:18.868077   28167 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:07:18.868108   28167 kube-vip.go:115] generating kube-vip config ...
	I0803 23:07:18.868154   28167 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:07:18.885252   28167 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:07:18.885387   28167 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:07:18.885486   28167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:07:18.896065   28167 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:07:18.896128   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:07:18.906028   28167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:07:18.923637   28167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:07:18.940633   28167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:07:18.957557   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0803 23:07:18.974793   28167 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:07:18.978897   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:07:18.991740   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:07:19.118712   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:07:19.136049   28167 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.154
	I0803 23:07:19.136070   28167 certs.go:194] generating shared ca certs ...
	I0803 23:07:19.136111   28167 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.136274   28167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:07:19.136332   28167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:07:19.136346   28167 certs.go:256] generating profile certs ...
	I0803 23:07:19.136410   28167 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:07:19.136427   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt with IP's: []
	I0803 23:07:19.399368   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt ...
	I0803 23:07:19.399399   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt: {Name:mk6c61cc1c71006c9038d48e8a7e1f6b49511ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.399595   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key ...
	I0803 23:07:19.399610   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key: {Name:mk95344414c61542ea81c8b8742957ef5d931958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.399714   28167 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe
	I0803 23:07:19.399732   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.254]
	I0803 23:07:19.564196   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe ...
	I0803 23:07:19.564227   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe: {Name:mkbfa31a03e37b87508ca9c99c62a5672518f21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.564406   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe ...
	I0803 23:07:19.564422   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe: {Name:mk7aded0581795aecb14ff48f72570c22d39bf16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.564514   28167 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:07:19.564630   28167 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
	I0803 23:07:19.564726   28167 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:07:19.564746   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt with IP's: []
	I0803 23:07:19.643530   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt ...
	I0803 23:07:19.643561   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt: {Name:mkd930a11b608539f35e44a6b66f29dc5cce84b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.643739   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key ...
	I0803 23:07:19.643762   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key: {Name:mk676ce01dd626e5d9c0506670645a6d47a52163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.643874   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:07:19.643899   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:07:19.643915   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:07:19.643932   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:07:19.643951   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:07:19.643976   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:07:19.643994   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:07:19.644012   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:07:19.644087   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:07:19.644137   28167 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:07:19.644151   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:07:19.644188   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:07:19.644222   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:07:19.644254   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:07:19.644311   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:07:19.644382   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:07:19.644409   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:19.644428   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:07:19.645581   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:07:19.673209   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:07:19.699940   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:07:19.725941   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:07:19.751358   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 23:07:19.779048   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:07:19.809834   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:07:19.838255   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:07:19.867259   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:07:19.896216   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:07:19.942432   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:07:19.977596   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:07:19.995235   28167 ssh_runner.go:195] Run: openssl version
	I0803 23:07:20.001027   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:07:20.012012   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:07:20.016390   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:07:20.016446   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:07:20.022301   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:07:20.032815   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:07:20.043547   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:20.048128   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:20.048184   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:20.053963   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:07:20.065029   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:07:20.076923   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:07:20.081664   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:07:20.081729   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:07:20.087575   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:07:20.098494   28167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:07:20.102837   28167 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:07:20.102898   28167 kubeadm.go:392] StartCluster: {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:07:20.102967   28167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:07:20.103041   28167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:07:20.139202   28167 cri.go:89] found id: ""
	I0803 23:07:20.139275   28167 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:07:20.149748   28167 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 23:07:20.159745   28167 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 23:07:20.169671   28167 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 23:07:20.169689   28167 kubeadm.go:157] found existing configuration files:
	
	I0803 23:07:20.169727   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 23:07:20.179245   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 23:07:20.179295   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 23:07:20.189110   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 23:07:20.198522   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 23:07:20.198585   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 23:07:20.208568   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 23:07:20.217742   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 23:07:20.217793   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 23:07:20.227256   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 23:07:20.236550   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 23:07:20.236596   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 23:07:20.246261   28167 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 23:07:20.487650   28167 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 23:07:31.490269   28167 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 23:07:31.490343   28167 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 23:07:31.490439   28167 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 23:07:31.490548   28167 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 23:07:31.490651   28167 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0803 23:07:31.490748   28167 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 23:07:31.492491   28167 out.go:204]   - Generating certificates and keys ...
	I0803 23:07:31.492578   28167 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 23:07:31.492650   28167 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 23:07:31.492733   28167 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 23:07:31.492811   28167 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 23:07:31.492896   28167 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 23:07:31.492966   28167 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 23:07:31.493046   28167 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 23:07:31.493181   28167 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-076508 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I0803 23:07:31.493273   28167 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 23:07:31.493450   28167 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-076508 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I0803 23:07:31.493549   28167 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 23:07:31.493649   28167 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 23:07:31.493687   28167 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 23:07:31.493734   28167 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 23:07:31.493776   28167 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 23:07:31.493823   28167 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 23:07:31.493880   28167 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 23:07:31.493959   28167 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 23:07:31.494024   28167 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 23:07:31.494134   28167 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 23:07:31.494225   28167 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 23:07:31.495749   28167 out.go:204]   - Booting up control plane ...
	I0803 23:07:31.495834   28167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 23:07:31.495902   28167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 23:07:31.495980   28167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 23:07:31.496079   28167 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 23:07:31.496161   28167 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 23:07:31.496202   28167 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 23:07:31.496319   28167 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 23:07:31.496410   28167 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 23:07:31.496471   28167 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001670926s
	I0803 23:07:31.496568   28167 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 23:07:31.496645   28167 kubeadm.go:310] [api-check] The API server is healthy after 5.827086685s
	I0803 23:07:31.496769   28167 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 23:07:31.496896   28167 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 23:07:31.496986   28167 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 23:07:31.497130   28167 kubeadm.go:310] [mark-control-plane] Marking the node ha-076508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 23:07:31.497223   28167 kubeadm.go:310] [bootstrap-token] Using token: y24y8s.6ynp5uqn81rz378h
	I0803 23:07:31.499530   28167 out.go:204]   - Configuring RBAC rules ...
	I0803 23:07:31.499637   28167 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 23:07:31.499718   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 23:07:31.499853   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 23:07:31.499970   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 23:07:31.500067   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 23:07:31.500142   28167 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 23:07:31.500247   28167 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 23:07:31.500324   28167 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 23:07:31.500402   28167 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 23:07:31.500411   28167 kubeadm.go:310] 
	I0803 23:07:31.500490   28167 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 23:07:31.500498   28167 kubeadm.go:310] 
	I0803 23:07:31.500602   28167 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 23:07:31.500613   28167 kubeadm.go:310] 
	I0803 23:07:31.500644   28167 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 23:07:31.500693   28167 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 23:07:31.500735   28167 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 23:07:31.500741   28167 kubeadm.go:310] 
	I0803 23:07:31.500788   28167 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 23:07:31.500797   28167 kubeadm.go:310] 
	I0803 23:07:31.500841   28167 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 23:07:31.500847   28167 kubeadm.go:310] 
	I0803 23:07:31.500914   28167 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 23:07:31.500987   28167 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 23:07:31.501050   28167 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 23:07:31.501059   28167 kubeadm.go:310] 
	I0803 23:07:31.501125   28167 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 23:07:31.501197   28167 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 23:07:31.501205   28167 kubeadm.go:310] 
	I0803 23:07:31.501276   28167 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y24y8s.6ynp5uqn81rz378h \
	I0803 23:07:31.501377   28167 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0803 23:07:31.501403   28167 kubeadm.go:310] 	--control-plane 
	I0803 23:07:31.501407   28167 kubeadm.go:310] 
	I0803 23:07:31.501475   28167 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 23:07:31.501481   28167 kubeadm.go:310] 
	I0803 23:07:31.501550   28167 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y24y8s.6ynp5uqn81rz378h \
	I0803 23:07:31.501643   28167 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0803 23:07:31.501653   28167 cni.go:84] Creating CNI manager for ""
	I0803 23:07:31.501658   28167 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:07:31.503192   28167 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0803 23:07:31.504428   28167 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0803 23:07:31.510517   28167 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0803 23:07:31.510535   28167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0803 23:07:31.528827   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
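The CNI step above applies minikube's bundled kindnet manifest through the same kubeconfig the node uses. A quick sanity check afterwards is to confirm the DaemonSet pods come up; a sketch, where the "kindnet" name is an assumption based on minikube's stock manifest rather than something this log prints:

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonsets
	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -o wide | grep -i kindnet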
	I0803 23:07:31.917829   28167 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 23:07:31.917902   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076508 minikube.k8s.io/updated_at=2024_08_03T23_07_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=ha-076508 minikube.k8s.io/primary=true
	I0803 23:07:31.917908   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:31.956804   28167 ops.go:34] apiserver oom_adj: -16
	I0803 23:07:32.096120   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:32.597167   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:33.096957   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:33.596999   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:34.096832   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:34.596471   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:35.097004   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:35.597125   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:36.096172   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:36.596310   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:37.097076   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:37.596611   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:38.096853   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:38.596166   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:39.097183   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:39.596275   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:40.096694   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:40.596509   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:41.096307   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:41.597095   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:42.096232   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:42.596401   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:43.096910   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:43.596980   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:43.747735   28167 kubeadm.go:1113] duration metric: took 11.829901645s to wait for elevateKubeSystemPrivileges
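The burst of `kubectl get sa default` calls above is minikube polling roughly twice a second until the "default" ServiceAccount exists, which appears to be what the elevateKubeSystemPrivileges duration metric is timing. The equivalent one-off check against the same binary and kubeconfig would be:

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  get serviceaccount default -n default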
	I0803 23:07:43.747775   28167 kubeadm.go:394] duration metric: took 23.644887361s to StartCluster
	I0803 23:07:43.747795   28167 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:43.747878   28167 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:07:43.748494   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:43.748706   28167 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:07:43.748720   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 23:07:43.748727   28167 start.go:241] waiting for startup goroutines ...
	I0803 23:07:43.748734   28167 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 23:07:43.748816   28167 addons.go:69] Setting storage-provisioner=true in profile "ha-076508"
	I0803 23:07:43.748819   28167 addons.go:69] Setting default-storageclass=true in profile "ha-076508"
	I0803 23:07:43.748840   28167 addons.go:234] Setting addon storage-provisioner=true in "ha-076508"
	I0803 23:07:43.748847   28167 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-076508"
	I0803 23:07:43.748869   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:07:43.748966   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:07:43.749277   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.749314   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.749279   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.749409   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.764934   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0803 23:07:43.765472   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.766055   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.766091   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.766400   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.766591   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:43.767941   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
	I0803 23:07:43.768354   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.768847   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.768874   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.769002   28167 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:07:43.769192   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.769328   28167 kapi.go:59] client config for ha-076508: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0803 23:07:43.769700   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.769726   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.769842   28167 cert_rotation.go:137] Starting client certificate rotation controller
	I0803 23:07:43.770048   28167 addons.go:234] Setting addon default-storageclass=true in "ha-076508"
	I0803 23:07:43.770093   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:07:43.770418   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.770447   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.785471   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44537
	I0803 23:07:43.785702   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0803 23:07:43.785978   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.786083   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.786560   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.786570   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.786588   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.786591   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.786932   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.786936   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.787181   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:43.787518   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.787564   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.789384   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:43.791206   28167 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:07:43.792308   28167 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:07:43.792326   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:07:43.792342   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:43.795542   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.796005   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:43.796037   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.796203   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:43.796383   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:43.796573   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:43.796741   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:43.802713   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0803 23:07:43.803152   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.803606   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.803625   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.803890   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.804068   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:43.805796   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:43.806001   28167 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:07:43.806016   28167 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:07:43.806033   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:43.808977   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.809430   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:43.809458   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.809588   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:43.809776   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:43.809939   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:43.810080   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:43.910095   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 23:07:43.959987   28167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:07:43.969087   28167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:07:44.334727   28167 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
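The sed pipeline at 23:07:43.910095 edits the coredns ConfigMap in place; reconstructed from its two insert expressions, the Corefile gains a "log" directive ahead of "errors" and a hosts block that resolves host.minikube.internal to the host-side gateway:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}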
	I0803 23:07:44.656780   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.656802   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.656861   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.656888   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.657149   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657166   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.657174   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.657181   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.657194   28167 main.go:141] libmachine: (ha-076508) DBG | Closing plugin on server side
	I0803 23:07:44.657228   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657238   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.657246   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.657254   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.657385   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657406   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.657511   28167 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0803 23:07:44.657523   28167 round_trippers.go:469] Request Headers:
	I0803 23:07:44.657533   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:07:44.657536   28167 main.go:141] libmachine: (ha-076508) DBG | Closing plugin on server side
	I0803 23:07:44.657540   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:07:44.657509   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657647   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.676707   28167 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0803 23:07:44.677309   28167 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0803 23:07:44.677325   28167 round_trippers.go:469] Request Headers:
	I0803 23:07:44.677333   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:07:44.677337   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:07:44.677341   28167 round_trippers.go:473]     Content-Type: application/json
	I0803 23:07:44.688684   28167 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0803 23:07:44.688995   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.689011   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.689303   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.689326   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.689331   28167 main.go:141] libmachine: (ha-076508) DBG | Closing plugin on server side
	I0803 23:07:44.691094   28167 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0803 23:07:44.692810   28167 addons.go:510] duration metric: took 944.073124ms for enable addons: enabled=[storage-provisioner default-storageclass]
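With both addons reported enabled, the cluster should expose a default StorageClass plus a storage-provisioner pod in kube-system. A verification sketch; the "standard" class name, the pod name, and the context name matching the profile are assumptions from minikube's stock behaviour, not values printed in this log:

	kubectl --context ha-076508 get storageclass
	kubectl --context ha-076508 -n kube-system get pod storage-provisioner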
	I0803 23:07:44.692841   28167 start.go:246] waiting for cluster config update ...
	I0803 23:07:44.692852   28167 start.go:255] writing updated cluster config ...
	I0803 23:07:44.694555   28167 out.go:177] 
	I0803 23:07:44.696127   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:07:44.696200   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:07:44.697954   28167 out.go:177] * Starting "ha-076508-m02" control-plane node in "ha-076508" cluster
	I0803 23:07:44.699690   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:07:44.699717   28167 cache.go:56] Caching tarball of preloaded images
	I0803 23:07:44.699806   28167 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:07:44.699819   28167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:07:44.699882   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:07:44.700198   28167 start.go:360] acquireMachinesLock for ha-076508-m02: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:07:44.700243   28167 start.go:364] duration metric: took 25.065µs to acquireMachinesLock for "ha-076508-m02"
	I0803 23:07:44.700260   28167 start.go:93] Provisioning new machine with config: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:07:44.700324   28167 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0803 23:07:44.702052   28167 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:07:44.702152   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:44.702180   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:44.717054   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0803 23:07:44.717495   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:44.717969   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:44.717991   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:44.718330   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:44.718556   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:07:44.718737   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:07:44.718937   28167 start.go:159] libmachine.API.Create for "ha-076508" (driver="kvm2")
	I0803 23:07:44.718961   28167 client.go:168] LocalClient.Create starting
	I0803 23:07:44.718999   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 23:07:44.719045   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:07:44.719065   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:07:44.719147   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 23:07:44.719176   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:07:44.719192   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:07:44.719212   28167 main.go:141] libmachine: Running pre-create checks...
	I0803 23:07:44.719224   28167 main.go:141] libmachine: (ha-076508-m02) Calling .PreCreateCheck
	I0803 23:07:44.719420   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetConfigRaw
	I0803 23:07:44.720368   28167 main.go:141] libmachine: Creating machine...
	I0803 23:07:44.720385   28167 main.go:141] libmachine: (ha-076508-m02) Calling .Create
	I0803 23:07:44.720530   28167 main.go:141] libmachine: (ha-076508-m02) Creating KVM machine...
	I0803 23:07:44.721969   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found existing default KVM network
	I0803 23:07:44.722090   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found existing private KVM network mk-ha-076508
	I0803 23:07:44.722265   28167 main.go:141] libmachine: (ha-076508-m02) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02 ...
	I0803 23:07:44.722292   28167 main.go:141] libmachine: (ha-076508-m02) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:07:44.722344   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:44.722250   28565 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:07:44.722432   28167 main.go:141] libmachine: (ha-076508-m02) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:07:44.959458   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:44.959322   28565 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa...
	I0803 23:07:45.050295   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:45.050161   28565 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/ha-076508-m02.rawdisk...
	I0803 23:07:45.050328   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Writing magic tar header
	I0803 23:07:45.050343   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Writing SSH key tar header
	I0803 23:07:45.050356   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:45.050266   28565 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02 ...
	I0803 23:07:45.050372   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02
	I0803 23:07:45.050421   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02 (perms=drwx------)
	I0803 23:07:45.050443   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:07:45.050460   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 23:07:45.050483   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 23:07:45.050498   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 23:07:45.050510   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:07:45.050525   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:07:45.050536   28167 main.go:141] libmachine: (ha-076508-m02) Creating domain...
	I0803 23:07:45.050552   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:07:45.050571   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 23:07:45.050592   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:07:45.050603   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:07:45.050617   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home
	I0803 23:07:45.050628   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Skipping /home - not owner
	I0803 23:07:45.051486   28167 main.go:141] libmachine: (ha-076508-m02) define libvirt domain using xml: 
	I0803 23:07:45.051508   28167 main.go:141] libmachine: (ha-076508-m02) <domain type='kvm'>
	I0803 23:07:45.051535   28167 main.go:141] libmachine: (ha-076508-m02)   <name>ha-076508-m02</name>
	I0803 23:07:45.051556   28167 main.go:141] libmachine: (ha-076508-m02)   <memory unit='MiB'>2200</memory>
	I0803 23:07:45.051565   28167 main.go:141] libmachine: (ha-076508-m02)   <vcpu>2</vcpu>
	I0803 23:07:45.051571   28167 main.go:141] libmachine: (ha-076508-m02)   <features>
	I0803 23:07:45.051580   28167 main.go:141] libmachine: (ha-076508-m02)     <acpi/>
	I0803 23:07:45.051586   28167 main.go:141] libmachine: (ha-076508-m02)     <apic/>
	I0803 23:07:45.051596   28167 main.go:141] libmachine: (ha-076508-m02)     <pae/>
	I0803 23:07:45.051605   28167 main.go:141] libmachine: (ha-076508-m02)     
	I0803 23:07:45.051616   28167 main.go:141] libmachine: (ha-076508-m02)   </features>
	I0803 23:07:45.051626   28167 main.go:141] libmachine: (ha-076508-m02)   <cpu mode='host-passthrough'>
	I0803 23:07:45.051633   28167 main.go:141] libmachine: (ha-076508-m02)   
	I0803 23:07:45.051646   28167 main.go:141] libmachine: (ha-076508-m02)   </cpu>
	I0803 23:07:45.051674   28167 main.go:141] libmachine: (ha-076508-m02)   <os>
	I0803 23:07:45.051699   28167 main.go:141] libmachine: (ha-076508-m02)     <type>hvm</type>
	I0803 23:07:45.051716   28167 main.go:141] libmachine: (ha-076508-m02)     <boot dev='cdrom'/>
	I0803 23:07:45.051724   28167 main.go:141] libmachine: (ha-076508-m02)     <boot dev='hd'/>
	I0803 23:07:45.051742   28167 main.go:141] libmachine: (ha-076508-m02)     <bootmenu enable='no'/>
	I0803 23:07:45.051755   28167 main.go:141] libmachine: (ha-076508-m02)   </os>
	I0803 23:07:45.051764   28167 main.go:141] libmachine: (ha-076508-m02)   <devices>
	I0803 23:07:45.051774   28167 main.go:141] libmachine: (ha-076508-m02)     <disk type='file' device='cdrom'>
	I0803 23:07:45.051790   28167 main.go:141] libmachine: (ha-076508-m02)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/boot2docker.iso'/>
	I0803 23:07:45.051801   28167 main.go:141] libmachine: (ha-076508-m02)       <target dev='hdc' bus='scsi'/>
	I0803 23:07:45.051810   28167 main.go:141] libmachine: (ha-076508-m02)       <readonly/>
	I0803 23:07:45.051817   28167 main.go:141] libmachine: (ha-076508-m02)     </disk>
	I0803 23:07:45.051827   28167 main.go:141] libmachine: (ha-076508-m02)     <disk type='file' device='disk'>
	I0803 23:07:45.051839   28167 main.go:141] libmachine: (ha-076508-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:07:45.051868   28167 main.go:141] libmachine: (ha-076508-m02)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/ha-076508-m02.rawdisk'/>
	I0803 23:07:45.051878   28167 main.go:141] libmachine: (ha-076508-m02)       <target dev='hda' bus='virtio'/>
	I0803 23:07:45.051890   28167 main.go:141] libmachine: (ha-076508-m02)     </disk>
	I0803 23:07:45.051901   28167 main.go:141] libmachine: (ha-076508-m02)     <interface type='network'>
	I0803 23:07:45.051913   28167 main.go:141] libmachine: (ha-076508-m02)       <source network='mk-ha-076508'/>
	I0803 23:07:45.051920   28167 main.go:141] libmachine: (ha-076508-m02)       <model type='virtio'/>
	I0803 23:07:45.051930   28167 main.go:141] libmachine: (ha-076508-m02)     </interface>
	I0803 23:07:45.051941   28167 main.go:141] libmachine: (ha-076508-m02)     <interface type='network'>
	I0803 23:07:45.051951   28167 main.go:141] libmachine: (ha-076508-m02)       <source network='default'/>
	I0803 23:07:45.051962   28167 main.go:141] libmachine: (ha-076508-m02)       <model type='virtio'/>
	I0803 23:07:45.051972   28167 main.go:141] libmachine: (ha-076508-m02)     </interface>
	I0803 23:07:45.051979   28167 main.go:141] libmachine: (ha-076508-m02)     <serial type='pty'>
	I0803 23:07:45.051990   28167 main.go:141] libmachine: (ha-076508-m02)       <target port='0'/>
	I0803 23:07:45.051999   28167 main.go:141] libmachine: (ha-076508-m02)     </serial>
	I0803 23:07:45.052008   28167 main.go:141] libmachine: (ha-076508-m02)     <console type='pty'>
	I0803 23:07:45.052018   28167 main.go:141] libmachine: (ha-076508-m02)       <target type='serial' port='0'/>
	I0803 23:07:45.052029   28167 main.go:141] libmachine: (ha-076508-m02)     </console>
	I0803 23:07:45.052037   28167 main.go:141] libmachine: (ha-076508-m02)     <rng model='virtio'>
	I0803 23:07:45.052054   28167 main.go:141] libmachine: (ha-076508-m02)       <backend model='random'>/dev/random</backend>
	I0803 23:07:45.052064   28167 main.go:141] libmachine: (ha-076508-m02)     </rng>
	I0803 23:07:45.052071   28167 main.go:141] libmachine: (ha-076508-m02)     
	I0803 23:07:45.052081   28167 main.go:141] libmachine: (ha-076508-m02)     
	I0803 23:07:45.052093   28167 main.go:141] libmachine: (ha-076508-m02)   </devices>
	I0803 23:07:45.052112   28167 main.go:141] libmachine: (ha-076508-m02) </domain>
	I0803 23:07:45.052127   28167 main.go:141] libmachine: (ha-076508-m02) 
	I0803 23:07:45.058836   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:88:e9:4e in network default
	I0803 23:07:45.059428   28167 main.go:141] libmachine: (ha-076508-m02) Ensuring networks are active...
	I0803 23:07:45.059451   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:45.060133   28167 main.go:141] libmachine: (ha-076508-m02) Ensuring network default is active
	I0803 23:07:45.060527   28167 main.go:141] libmachine: (ha-076508-m02) Ensuring network mk-ha-076508 is active
	I0803 23:07:45.061091   28167 main.go:141] libmachine: (ha-076508-m02) Getting domain xml...
	I0803 23:07:45.061900   28167 main.go:141] libmachine: (ha-076508-m02) Creating domain...
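While the driver waits for a DHCP lease below, the freshly defined guest can also be inspected straight through libvirt; a sketch using stock virsh subcommands against the same qemu:///system URI the driver connects to:

	virsh --connect qemu:///system dominfo ha-076508-m02
	virsh --connect qemu:///system domifaddr ha-076508-m02 --source lease
	virsh --connect qemu:///system net-dhcp-leases mk-ha-076508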
	I0803 23:07:46.276718   28167 main.go:141] libmachine: (ha-076508-m02) Waiting to get IP...
	I0803 23:07:46.277542   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:46.278077   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:46.278117   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:46.278049   28565 retry.go:31] will retry after 262.095555ms: waiting for machine to come up
	I0803 23:07:46.541381   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:46.541763   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:46.541789   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:46.541712   28565 retry.go:31] will retry after 322.506254ms: waiting for machine to come up
	I0803 23:07:46.866323   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:46.866715   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:46.866743   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:46.866674   28565 retry.go:31] will retry after 306.839411ms: waiting for machine to come up
	I0803 23:07:47.175280   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:47.175727   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:47.175763   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:47.175683   28565 retry.go:31] will retry after 405.983973ms: waiting for machine to come up
	I0803 23:07:47.583154   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:47.583682   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:47.583730   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:47.583657   28565 retry.go:31] will retry after 521.558917ms: waiting for machine to come up
	I0803 23:07:48.106472   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:48.107190   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:48.107239   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:48.107172   28565 retry.go:31] will retry after 677.724945ms: waiting for machine to come up
	I0803 23:07:48.786099   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:48.786576   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:48.786603   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:48.786530   28565 retry.go:31] will retry after 1.054768836s: waiting for machine to come up
	I0803 23:07:49.843130   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:49.843542   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:49.843570   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:49.843501   28565 retry.go:31] will retry after 1.195620314s: waiting for machine to come up
	I0803 23:07:51.040530   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:51.040986   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:51.041015   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:51.040950   28565 retry.go:31] will retry after 1.178141721s: waiting for machine to come up
	I0803 23:07:52.220851   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:52.221283   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:52.221303   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:52.221240   28565 retry.go:31] will retry after 1.497880009s: waiting for machine to come up
	I0803 23:07:53.720867   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:53.721329   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:53.721347   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:53.721293   28565 retry.go:31] will retry after 1.77773676s: waiting for machine to come up
	I0803 23:07:55.500605   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:55.501010   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:55.501038   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:55.500959   28565 retry.go:31] will retry after 2.214448382s: waiting for machine to come up
	I0803 23:07:57.718319   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:57.718692   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:57.718714   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:57.718662   28565 retry.go:31] will retry after 3.914237089s: waiting for machine to come up
	I0803 23:08:01.634618   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:01.635117   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:08:01.635141   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:08:01.635088   28565 retry.go:31] will retry after 5.603783961s: waiting for machine to come up
	I0803 23:08:07.242373   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.242842   28167 main.go:141] libmachine: (ha-076508-m02) Found IP for machine: 192.168.39.245
	I0803 23:08:07.242864   28167 main.go:141] libmachine: (ha-076508-m02) Reserving static IP address...
	I0803 23:08:07.242875   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.243390   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find host DHCP lease matching {name: "ha-076508-m02", mac: "52:54:00:d6:c8:3b", ip: "192.168.39.245"} in network mk-ha-076508
	I0803 23:08:07.318237   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Getting to WaitForSSH function...
	I0803 23:08:07.318264   28167 main.go:141] libmachine: (ha-076508-m02) Reserved static IP address: 192.168.39.245
	I0803 23:08:07.318276   28167 main.go:141] libmachine: (ha-076508-m02) Waiting for SSH to be available...
	I0803 23:08:07.320887   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.321294   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.321335   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.321495   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Using SSH client type: external
	I0803 23:08:07.321520   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa (-rw-------)
	I0803 23:08:07.321580   28167 main.go:141] libmachine: (ha-076508-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:08:07.321597   28167 main.go:141] libmachine: (ha-076508-m02) DBG | About to run SSH command:
	I0803 23:08:07.321611   28167 main.go:141] libmachine: (ha-076508-m02) DBG | exit 0
	I0803 23:08:07.449726   28167 main.go:141] libmachine: (ha-076508-m02) DBG | SSH cmd err, output: <nil>: 
	I0803 23:08:07.450029   28167 main.go:141] libmachine: (ha-076508-m02) KVM machine creation complete!
	I0803 23:08:07.450332   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetConfigRaw
	I0803 23:08:07.450872   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:07.451077   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:07.451231   28167 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:08:07.451246   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:08:07.452553   28167 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:08:07.452582   28167 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:08:07.452591   28167 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:08:07.452602   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.456057   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.456440   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.456469   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.456619   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.456771   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.456945   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.457072   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.457217   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.457425   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.457437   28167 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:08:07.569167   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:08:07.569193   28167 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:08:07.569201   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.572136   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.572528   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.572564   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.572661   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.572865   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.573051   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.573166   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.573304   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.573486   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.573500   28167 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:08:07.686386   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:08:07.686465   28167 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:08:07.686475   28167 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:08:07.686483   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:08:07.686788   28167 buildroot.go:166] provisioning hostname "ha-076508-m02"
	I0803 23:08:07.686813   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:08:07.686996   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.689797   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.690234   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.690263   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.690392   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.690568   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.690732   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.690876   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.691015   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.691183   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.691194   28167 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508-m02 && echo "ha-076508-m02" | sudo tee /etc/hostname
	I0803 23:08:07.821783   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508-m02
	
	I0803 23:08:07.821812   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.824483   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.824819   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.824847   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.825031   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.825247   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.825426   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.825583   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.825742   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.825960   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.825985   28167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:08:07.947012   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
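The shell snippet above is idempotent: it either rewrites an existing 127.0.1.1 entry or appends one, so after provisioning the guest's /etc/hosts should contain a line equivalent to:

	127.0.1.1 ha-076508-m02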
	I0803 23:08:07.947045   28167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:08:07.947074   28167 buildroot.go:174] setting up certificates
	I0803 23:08:07.947094   28167 provision.go:84] configureAuth start
	I0803 23:08:07.947113   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:08:07.947425   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:07.950324   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.950751   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.950783   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.950933   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.953130   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.953512   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.953540   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.953681   28167 provision.go:143] copyHostCerts
	I0803 23:08:07.953715   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:08:07.953753   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:08:07.953762   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:08:07.953831   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:08:07.953906   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:08:07.953923   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:08:07.953930   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:08:07.953955   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:08:07.953996   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:08:07.954014   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:08:07.954020   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:08:07.954042   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:08:07.954094   28167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508-m02 san=[127.0.0.1 192.168.39.245 ha-076508-m02 localhost minikube]
	I0803 23:08:08.317485   28167 provision.go:177] copyRemoteCerts
	I0803 23:08:08.317547   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:08:08.317575   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.320596   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.321034   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.321069   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.321246   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.321435   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.321635   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.321758   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:08.408235   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:08:08.408314   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:08:08.434966   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:08:08.435037   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:08:08.463764   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:08:08.463842   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:08:08.489067   28167 provision.go:87] duration metric: took 541.95512ms to configureAuth
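The server certificate generated by configureAuth above carries the SANs listed at provision.go:117 (127.0.0.1, 192.168.39.245, ha-076508-m02, localhost, minikube) and is copied to /etc/docker/server.pem on the guest. A minimal sketch, assuming the profile and node names from this run, of how those SANs could be spot-checked; this is not part of the test itself:

    # Hypothetical spot-check; profile/node names assumed from the log above.
    minikube ssh -p ha-076508 -n m02 -- "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 "Subject Alternative Name"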
	I0803 23:08:08.489096   28167 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:08:08.489277   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:08:08.489379   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.492019   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.492394   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.492424   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.492539   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.492704   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.492790   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.492891   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.493040   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:08.493192   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:08.493205   28167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:08:08.775774   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:08:08.775827   28167 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:08:08.775839   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetURL
	I0803 23:08:08.777262   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Using libvirt version 6000000
	I0803 23:08:08.779496   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.779845   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.779872   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.780041   28167 main.go:141] libmachine: Docker is up and running!
	I0803 23:08:08.780059   28167 main.go:141] libmachine: Reticulating splines...
	I0803 23:08:08.780067   28167 client.go:171] duration metric: took 24.061098594s to LocalClient.Create
	I0803 23:08:08.780094   28167 start.go:167] duration metric: took 24.061158189s to libmachine.API.Create "ha-076508"
	I0803 23:08:08.780106   28167 start.go:293] postStartSetup for "ha-076508-m02" (driver="kvm2")
	I0803 23:08:08.780118   28167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:08:08.780149   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:08.780381   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:08:08.780402   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.782577   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.782870   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.782900   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.783049   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.783239   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.783399   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.783516   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:08.868965   28167 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:08:08.873427   28167 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:08:08.873452   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:08:08.873536   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:08:08.873636   28167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:08:08.873650   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:08:08.873765   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:08:08.883854   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:08:08.910258   28167 start.go:296] duration metric: took 130.136737ms for postStartSetup
	I0803 23:08:08.910312   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetConfigRaw
	I0803 23:08:08.910868   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:08.913571   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.913868   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.913897   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.914128   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:08:08.914308   28167 start.go:128] duration metric: took 24.213972239s to createHost
	I0803 23:08:08.914329   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.916673   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.917132   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.917157   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.917315   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.917547   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.917684   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.917792   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.918110   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:08.918320   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:08.918335   28167 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 23:08:09.030445   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722726489.006665611
	
	I0803 23:08:09.030465   28167 fix.go:216] guest clock: 1722726489.006665611
	I0803 23:08:09.030473   28167 fix.go:229] Guest: 2024-08-03 23:08:09.006665611 +0000 UTC Remote: 2024-08-03 23:08:08.914318937 +0000 UTC m=+81.457259917 (delta=92.346674ms)
	I0803 23:08:09.030488   28167 fix.go:200] guest clock delta is within tolerance: 92.346674ms
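fix.go compares the guest clock, read over SSH with date +%s.%N, against the host clock and only resyncs when the delta exceeds a tolerance; here the ~92ms delta is accepted. A rough sketch of the same comparison, with the profile and node names assumed from this run:

    # Hypothetical re-measurement of the guest clock delta; names assumed from the log.
    host_now=$(date +%s.%N)
    guest_now=$(minikube ssh -p ha-076508 -n m02 -- "date +%s.%N")
    echo "delta: $(echo "$host_now - $guest_now" | bc)s"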
	I0803 23:08:09.030493   28167 start.go:83] releasing machines lock for "ha-076508-m02", held for 24.330240912s
	I0803 23:08:09.030510   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.030890   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:09.033519   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.034038   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:09.034068   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.036570   28167 out.go:177] * Found network options:
	I0803 23:08:09.038141   28167 out.go:177]   - NO_PROXY=192.168.39.154
	W0803 23:08:09.039650   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:08:09.039686   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.040356   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.040522   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.040590   28167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:08:09.040631   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	W0803 23:08:09.040711   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:08:09.040784   28167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:08:09.040816   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:09.043490   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.043734   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.043905   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:09.043935   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.044087   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:09.044106   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.044121   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:09.044312   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:09.044325   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:09.044532   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:09.044534   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:09.044698   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:09.044739   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:09.044867   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:09.282944   28167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:08:09.289754   28167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:08:09.289860   28167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:08:09.306644   28167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:08:09.306665   28167 start.go:495] detecting cgroup driver to use...
	I0803 23:08:09.306719   28167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:08:09.323473   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:08:09.338325   28167 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:08:09.338398   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:08:09.354671   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:08:09.371514   28167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:08:09.490414   28167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:08:09.633233   28167 docker.go:233] disabling docker service ...
	I0803 23:08:09.633307   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:08:09.649648   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:08:09.663216   28167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:08:09.798744   28167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:08:09.933183   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:08:09.948876   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:08:09.968963   28167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:08:09.969030   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:09.980877   28167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:08:09.980937   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:09.992527   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.003373   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.013679   28167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:08:10.024067   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.034653   28167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.053928   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.066025   28167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:08:10.076716   28167 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:08:10.076785   28167 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:08:10.091227   28167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
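The sysctl probe above fails only because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist; after the modprobe and the ip_forward write, both keys should be readable. A hypothetical re-check, with names assumed from this run:

    # Hypothetical re-check after br_netfilter is loaded; profile/node assumed from the log.
    minikube ssh -p ha-076508 -n m02 -- "sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"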
	I0803 23:08:10.101389   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:08:10.219495   28167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:08:10.364061   28167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:08:10.364144   28167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:08:10.370226   28167 start.go:563] Will wait 60s for crictl version
	I0803 23:08:10.370294   28167 ssh_runner.go:195] Run: which crictl
	I0803 23:08:10.374289   28167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:08:10.418729   28167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
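The sed edits above set the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf, while the --insecure-registry option is written to /etc/sysconfig/crio.minikube before the crio restart. A hedged way to confirm both on the node, assuming the names from this run (whether the crio unit actually sources that env file depends on the ISO's crio.service):

    # Hypothetical verification of the CRI-O settings written above.
    minikube ssh -p ha-076508 -n m02 -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    minikube ssh -p ha-076508 -n m02 -- "sudo cat /etc/sysconfig/crio.minikube"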
	I0803 23:08:10.418821   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:08:10.448365   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:08:10.480036   28167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:08:10.481727   28167 out.go:177]   - env NO_PROXY=192.168.39.154
	I0803 23:08:10.483057   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:10.486017   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:10.486299   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:10.486319   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:10.486557   28167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:08:10.490779   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:08:10.503703   28167 mustload.go:65] Loading cluster: ha-076508
	I0803 23:08:10.503952   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:08:10.504210   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:08:10.504235   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:08:10.518805   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0803 23:08:10.519287   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:08:10.519717   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:08:10.519738   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:08:10.520123   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:08:10.520329   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:08:10.522069   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:08:10.522343   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:08:10.522370   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:08:10.537123   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0803 23:08:10.537555   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:08:10.537989   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:08:10.538007   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:08:10.538315   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:08:10.538493   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:08:10.538716   28167 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.245
	I0803 23:08:10.538728   28167 certs.go:194] generating shared ca certs ...
	I0803 23:08:10.538742   28167 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:08:10.538878   28167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:08:10.538934   28167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:08:10.538947   28167 certs.go:256] generating profile certs ...
	I0803 23:08:10.539044   28167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:08:10.539081   28167 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72
	I0803 23:08:10.539103   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.245 192.168.39.254]
	I0803 23:08:10.607588   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72 ...
	I0803 23:08:10.607617   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72: {Name:mk5470fdf54109f9a0315f27866a337c16f70579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:08:10.607797   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72 ...
	I0803 23:08:10.607819   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72: {Name:mkb13d3af1c57c46674af59886c41467b9704ffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:08:10.607915   28167 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:08:10.608041   28167 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
	I0803 23:08:10.608163   28167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:08:10.608177   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:08:10.608190   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:08:10.608201   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:08:10.608211   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:08:10.608220   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:08:10.608231   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:08:10.608241   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:08:10.608253   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:08:10.608299   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:08:10.608361   28167 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:08:10.608373   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:08:10.608398   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:08:10.608421   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:08:10.608442   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:08:10.608475   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:08:10.608498   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:10.608513   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:08:10.608525   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:08:10.608555   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:08:10.611493   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:08:10.611839   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:08:10.611865   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:08:10.612030   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:08:10.612229   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:08:10.612424   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:08:10.612561   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:08:10.697777   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0803 23:08:10.703743   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:08:10.717864   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0803 23:08:10.722732   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0803 23:08:10.733619   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:08:10.738226   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:08:10.751783   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:08:10.756588   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:08:10.770950   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:08:10.775965   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:08:10.788447   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0803 23:08:10.793802   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:08:10.809142   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:08:10.835911   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:08:10.862796   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:08:10.892760   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:08:10.920294   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0803 23:08:10.945621   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:08:10.971720   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:08:10.997989   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:08:11.022543   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:08:11.048466   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:08:11.074385   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:08:11.098355   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:08:11.116658   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0803 23:08:11.133799   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:08:11.150804   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:08:11.169616   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:08:11.186653   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:08:11.204123   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:08:11.221204   28167 ssh_runner.go:195] Run: openssl version
	I0803 23:08:11.227629   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:08:11.239234   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:11.243933   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:11.243986   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:11.250720   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:08:11.262577   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:08:11.275722   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:08:11.280617   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:08:11.280683   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:08:11.286720   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:08:11.299605   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:08:11.312849   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:08:11.317867   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:08:11.317911   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:08:11.323881   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
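The three ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention: the link name is the hash printed by openssl x509 -hash for the corresponding PEM, with a .0 suffix, which is what lets TLS clients find the CA by hash under /etc/ssl/certs. For example, run on the guest with the path from the log:

    # Reproduces the name of the /etc/ssl/certs/<hash>.0 symlink for the minikube CA.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # expected: b5213941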
	I0803 23:08:11.335450   28167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:08:11.340056   28167 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:08:11.340113   28167 kubeadm.go:934] updating node {m02 192.168.39.245 8443 v1.30.3 crio true true} ...
	I0803 23:08:11.340191   28167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:08:11.340217   28167 kube-vip.go:115] generating kube-vip config ...
	I0803 23:08:11.340258   28167 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:08:11.362090   28167 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:08:11.362155   28167 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
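The manifest above runs kube-vip as a static pod that does leader election on the plndr-cp-lock lease and advertises the control-plane VIP 192.168.39.254 on eth0 of whichever control-plane node currently holds the lease. Hedged post-join checks, assuming the kubeconfig context matches the profile name from this run:

    # Hypothetical checks; context/profile names assumed from the log.
    kubectl --context ha-076508 -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
    minikube ssh -p ha-076508 -- "ip -4 addr show eth0 | grep 192.168.39.254"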
	I0803 23:08:11.362223   28167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:08:11.375039   28167 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:08:11.375130   28167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:08:11.387416   28167 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0803 23:08:11.387444   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:08:11.387471   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:08:11.387532   28167 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0803 23:08:11.387591   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:08:11.392212   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:08:11.392238   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:08:12.659618   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:08:12.659709   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:08:12.665019   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:08:12.665074   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:09:21.509046   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:09:21.526148   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:09:21.526234   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:09:21.530687   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:09:21.530722   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
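kubectl, kubeadm and kubelet are cached on the host under .minikube/cache and copied to /var/lib/minikube/binaries/v1.30.3 on the new node; the kubelet transfer above is roughly 100 MB, which is consistent with the minute-long gap in timestamps before 23:09:21. A quick, assumed-names check that the transferred binaries run on the node:

    # Hypothetical verification of the binaries copied above; names assumed from the log.
    minikube ssh -p ha-076508 -n m02 -- "/var/lib/minikube/binaries/v1.30.3/kubelet --version && /var/lib/minikube/binaries/v1.30.3/kubeadm version -o short"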
	I0803 23:09:21.944143   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:09:21.953877   28167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0803 23:09:21.970785   28167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:09:21.988574   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
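At this point the kubelet unit, its kubeadm drop-in, and the kube-vip static-pod manifest have all been written to the new node. A minimal sketch, with assumed names, of how those files could be listed before the join:

    # Hypothetical listing of the files written above.
    minikube ssh -p ha-076508 -n m02 -- "sudo ls /etc/systemd/system/kubelet.service.d /lib/systemd/system/kubelet.service /etc/kubernetes/manifests"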
	I0803 23:09:22.006385   28167 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:09:22.010604   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:09:22.022805   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:09:22.146860   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:09:22.163978   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:09:22.164299   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:09:22.164333   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:09:22.179458   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39883
	I0803 23:09:22.179882   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:09:22.180310   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:09:22.180336   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:09:22.180630   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:09:22.180826   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:09:22.180979   28167 start.go:317] joinCluster: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:09:22.181096   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:09:22.181112   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:09:22.183955   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:09:22.184367   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:09:22.184398   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:09:22.184513   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:09:22.184682   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:09:22.184818   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:09:22.184940   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:09:22.356179   28167 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:09:22.356218   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1433o5.s4u1fkuqzly79dfp --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I0803 23:09:43.804396   28167 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1433o5.s4u1fkuqzly79dfp --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (21.448151925s)
	I0803 23:09:43.804432   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:09:44.423119   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076508-m02 minikube.k8s.io/updated_at=2024_08_03T23_09_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=ha-076508 minikube.k8s.io/primary=false
	I0803 23:09:44.562392   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076508-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0803 23:09:44.678628   28167 start.go:319] duration metric: took 22.497645294s to joinCluster
	I0803 23:09:44.678700   28167 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:09:44.679030   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:09:44.680593   28167 out.go:177] * Verifying Kubernetes components...
	I0803 23:09:44.682197   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:09:44.987445   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:09:45.056753   28167 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:09:45.056960   28167 kapi.go:59] client config for ha-076508: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:09:45.057011   28167 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.154:8443
	I0803 23:09:45.057196   28167 node_ready.go:35] waiting up to 6m0s for node "ha-076508-m02" to be "Ready" ...
	I0803 23:09:45.057288   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:45.057296   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:45.057303   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:45.057309   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:45.068701   28167 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0803 23:09:45.557385   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:45.557406   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:45.557414   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:45.557418   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:45.561991   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:46.058061   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:46.058087   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:46.058098   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:46.058106   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:46.064936   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:09:46.558430   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:46.558457   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:46.558465   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:46.558468   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:46.562544   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:47.057830   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:47.057852   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:47.057860   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:47.057866   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:47.062460   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:47.063462   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:47.558007   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:47.558036   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:47.558049   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:47.558054   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:47.561647   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:48.058234   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:48.058254   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:48.058265   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:48.058271   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:48.061711   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:48.557523   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:48.557546   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:48.557557   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:48.557561   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:48.562001   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:49.057788   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:49.057821   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:49.057835   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:49.057840   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:49.061291   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:49.558036   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:49.558056   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:49.558065   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:49.558068   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:49.562374   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:49.562984   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:50.057836   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:50.057860   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:50.057869   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:50.057874   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:50.061458   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:50.558217   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:50.558239   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:50.558247   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:50.558251   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:50.562002   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:51.058208   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:51.058234   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:51.058244   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:51.058251   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:51.061560   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:51.557746   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:51.557770   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:51.557782   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:51.557787   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:51.562369   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:52.057915   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:52.057932   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:52.057940   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:52.057945   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:52.060865   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:09:52.061677   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:52.557598   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:52.557628   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:52.557643   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:52.557649   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:52.562479   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:53.058196   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:53.058224   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:53.058236   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:53.058243   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:53.067540   28167 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0803 23:09:53.557515   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:53.557533   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:53.557541   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:53.557546   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:53.560896   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:54.058431   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:54.058454   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:54.058463   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:54.058468   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:54.061946   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:54.062505   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:54.558011   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:54.558037   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:54.558049   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:54.558057   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:54.562411   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:55.057939   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:55.057962   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:55.057972   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:55.057983   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:55.060619   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:09:55.558029   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:55.558052   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:55.558064   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:55.558071   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:55.562338   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:56.058362   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:56.058383   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:56.058394   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:56.058401   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:56.062018   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:56.557862   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:56.557891   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:56.557899   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:56.557903   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:56.561601   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:56.562123   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:57.057472   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:57.057492   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:57.057500   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:57.057505   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:57.061083   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:57.557892   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:57.557915   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:57.557924   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:57.557928   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:57.561449   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:58.058028   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:58.058056   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:58.058069   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:58.058074   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:58.061875   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:58.558022   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:58.558044   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:58.558052   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:58.558056   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:58.561975   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:58.562554   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:59.057981   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:59.058004   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:59.058015   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:59.058021   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:59.063360   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:09:59.557496   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:59.557520   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:59.557530   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:59.557536   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:59.560887   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:00.058260   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:00.058289   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:00.058299   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:00.058303   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:00.062911   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:00.558439   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:00.558465   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:00.558475   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:00.558480   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:00.563951   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:10:00.564494   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:10:01.057823   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:01.057847   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:01.057858   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:01.057863   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:01.061961   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:01.557524   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:01.557547   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:01.557557   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:01.557564   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:01.560766   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:02.057852   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:02.057880   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:02.057892   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:02.057897   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:02.061338   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:02.558420   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:02.558441   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:02.558451   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:02.558457   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:02.562393   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.058015   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:03.058041   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.058050   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.058054   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.061815   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.062300   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:10:03.557672   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:03.557695   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.557703   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.557708   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.561161   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.562034   28167 node_ready.go:49] node "ha-076508-m02" has status "Ready":"True"
	I0803 23:10:03.562054   28167 node_ready.go:38] duration metric: took 18.504824963s for node "ha-076508-m02" to be "Ready" ...
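For reference, the readiness poll above (the repeated GETs against /api/v1/nodes/ha-076508-m02) is minikube's own wait loop; a roughly equivalent manual check, sketched here as a hypothetical command assuming the ha-076508 kubeconfig context, would be:

	kubectl --context ha-076508 wait --for=condition=Ready node/ha-076508-m02 --timeout=6m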
	I0803 23:10:03.562070   28167 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:10:03.562135   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:03.562144   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.562151   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.562155   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.567869   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:10:03.574443   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.574536   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g4nns
	I0803 23:10:03.574555   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.574567   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.574577   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.577857   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.578621   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.578637   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.578645   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.578650   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.581732   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.582382   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.582398   28167 pod_ready.go:81] duration metric: took 7.929465ms for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.582407   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.582456   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jm52b
	I0803 23:10:03.582463   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.582470   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.582475   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.585043   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.585739   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.585754   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.585762   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.585767   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.587887   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.588453   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.588474   28167 pod_ready.go:81] duration metric: took 6.06048ms for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.588485   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.588549   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508
	I0803 23:10:03.588559   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.588569   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.588576   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.590791   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.591475   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.591492   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.591504   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.591510   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.593701   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.594240   28167 pod_ready.go:92] pod "etcd-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.594262   28167 pod_ready.go:81] duration metric: took 5.764629ms for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.594273   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.594321   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m02
	I0803 23:10:03.594328   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.594335   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.594339   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.598557   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:03.599422   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:03.599434   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.599441   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.599450   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.601740   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.604216   28167 pod_ready.go:92] pod "etcd-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.604234   28167 pod_ready.go:81] duration metric: took 9.953932ms for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.604253   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.757799   28167 request.go:629] Waited for 153.482159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:10:03.757862   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:10:03.757867   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.757875   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.757879   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.761560   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.958391   28167 request.go:629] Waited for 196.043448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.958441   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.958446   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.958454   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.958458   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.962496   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:03.963132   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.963151   28167 pod_ready.go:81] duration metric: took 358.889806ms for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.963165   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.158373   28167 request.go:629] Waited for 195.12224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:10:04.158439   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:10:04.158445   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.158456   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.158461   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.161999   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:04.358081   28167 request.go:629] Waited for 195.407692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:04.358137   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:04.358142   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.358150   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.358154   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.361889   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:04.362665   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:04.362686   28167 pod_ready.go:81] duration metric: took 399.512992ms for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.362696   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.557711   28167 request.go:629] Waited for 194.942202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:10:04.557780   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:10:04.557786   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.557795   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.557802   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.567075   28167 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0803 23:10:04.758119   28167 request.go:629] Waited for 190.367024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:04.758196   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:04.758202   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.758211   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.758215   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.762702   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:04.763905   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:04.763925   28167 pod_ready.go:81] duration metric: took 401.222332ms for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.763938   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.958067   28167 request.go:629] Waited for 194.05371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:10:04.958157   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:10:04.958165   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.958180   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.958190   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.961483   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:05.158538   28167 request.go:629] Waited for 196.325518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.158588   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.158593   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.158602   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.158605   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.161325   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:05.161730   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:05.161749   28167 pod_ready.go:81] duration metric: took 397.803013ms for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.161761   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.358005   28167 request.go:629] Waited for 196.170756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:10:05.358086   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:10:05.358095   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.358102   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.358112   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.362136   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:05.558654   28167 request.go:629] Waited for 195.840812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.558704   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.558709   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.558717   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.558723   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.562855   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:05.563314   28167 pod_ready.go:92] pod "kube-proxy-hkfgl" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:05.563331   28167 pod_ready.go:81] duration metric: took 401.562684ms for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.563343   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.758434   28167 request.go:629] Waited for 195.023596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:10:05.758521   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:10:05.758537   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.758548   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.758557   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.762220   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:05.958165   28167 request.go:629] Waited for 195.399403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:05.958223   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:05.958228   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.958236   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.958241   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.962239   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:05.963167   28167 pod_ready.go:92] pod "kube-proxy-jvj96" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:05.963185   28167 pod_ready.go:81] duration metric: took 399.834576ms for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.963194   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.158316   28167 request.go:629] Waited for 195.044863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:10:06.158376   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:10:06.158381   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.158389   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.158394   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.170042   28167 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0803 23:10:06.357901   28167 request.go:629] Waited for 187.300794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:06.357960   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:06.357965   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.357972   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.357976   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.361223   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:06.361766   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:06.361788   28167 pod_ready.go:81] duration metric: took 398.588522ms for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.361798   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.557911   28167 request.go:629] Waited for 196.032404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:10:06.557969   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:10:06.557975   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.557983   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.557991   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.561105   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:06.758066   28167 request.go:629] Waited for 196.362667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:06.758138   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:06.758143   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.758152   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.758157   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.762072   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:06.762508   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:06.762526   28167 pod_ready.go:81] duration metric: took 400.722781ms for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.762536   28167 pod_ready.go:38] duration metric: took 3.200448227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:10:06.762563   28167 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:10:06.762634   28167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:10:06.784072   28167 api_server.go:72] duration metric: took 22.105332742s to wait for apiserver process to appear ...
	I0803 23:10:06.784107   28167 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:10:06.784132   28167 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I0803 23:10:06.788410   28167 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
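The same healthz probe can be reproduced by hand; a hypothetical spot-check (unauthenticated access to /healthz is normally permitted by the default system:public-info-viewer RBAC binding, and -k skips verification of the cluster's self-signed CA):

	curl -k https://192.168.39.154:8443/healthz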
	I0803 23:10:06.788476   28167 round_trippers.go:463] GET https://192.168.39.154:8443/version
	I0803 23:10:06.788484   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.788492   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.788495   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.789307   28167 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0803 23:10:06.789427   28167 api_server.go:141] control plane version: v1.30.3
	I0803 23:10:06.789445   28167 api_server.go:131] duration metric: took 5.331655ms to wait for apiserver health ...
	I0803 23:10:06.789454   28167 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:10:06.957795   28167 request.go:629] Waited for 168.278061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:06.957878   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:06.957884   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.957891   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.957895   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.965395   28167 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:10:06.969834   28167 system_pods.go:59] 17 kube-system pods found
	I0803 23:10:06.969868   28167 system_pods.go:61] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:10:06.969874   28167 system_pods.go:61] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:10:06.969878   28167 system_pods.go:61] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:10:06.969882   28167 system_pods.go:61] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:10:06.969885   28167 system_pods.go:61] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:10:06.969888   28167 system_pods.go:61] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:10:06.969892   28167 system_pods.go:61] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:10:06.969895   28167 system_pods.go:61] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:10:06.969898   28167 system_pods.go:61] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:10:06.969901   28167 system_pods.go:61] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:10:06.969903   28167 system_pods.go:61] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:10:06.969906   28167 system_pods.go:61] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:10:06.969909   28167 system_pods.go:61] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:10:06.969911   28167 system_pods.go:61] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:10:06.969914   28167 system_pods.go:61] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:10:06.969917   28167 system_pods.go:61] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:10:06.969919   28167 system_pods.go:61] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:10:06.969925   28167 system_pods.go:74] duration metric: took 180.464708ms to wait for pod list to return data ...
	I0803 23:10:06.969933   28167 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:10:07.158393   28167 request.go:629] Waited for 188.390565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:10:07.158466   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:10:07.158476   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:07.158487   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:07.158496   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:07.163254   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:07.163599   28167 default_sa.go:45] found service account: "default"
	I0803 23:10:07.163624   28167 default_sa.go:55] duration metric: took 193.683724ms for default service account to be created ...
	I0803 23:10:07.163635   28167 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:10:07.358108   28167 request.go:629] Waited for 194.40227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:07.358184   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:07.358190   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:07.358197   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:07.358201   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:07.364112   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:10:07.369572   28167 system_pods.go:86] 17 kube-system pods found
	I0803 23:10:07.369606   28167 system_pods.go:89] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:10:07.369618   28167 system_pods.go:89] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:10:07.369625   28167 system_pods.go:89] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:10:07.369630   28167 system_pods.go:89] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:10:07.369636   28167 system_pods.go:89] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:10:07.369641   28167 system_pods.go:89] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:10:07.369648   28167 system_pods.go:89] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:10:07.369654   28167 system_pods.go:89] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:10:07.369661   28167 system_pods.go:89] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:10:07.369668   28167 system_pods.go:89] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:10:07.369679   28167 system_pods.go:89] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:10:07.369689   28167 system_pods.go:89] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:10:07.369699   28167 system_pods.go:89] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:10:07.369707   28167 system_pods.go:89] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:10:07.369716   28167 system_pods.go:89] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:10:07.369722   28167 system_pods.go:89] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:10:07.369731   28167 system_pods.go:89] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:10:07.369740   28167 system_pods.go:126] duration metric: took 206.098508ms to wait for k8s-apps to be running ...
	I0803 23:10:07.369761   28167 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:10:07.369818   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:10:07.392122   28167 system_svc.go:56] duration metric: took 22.355063ms WaitForService to wait for kubelet
	I0803 23:10:07.392147   28167 kubeadm.go:582] duration metric: took 22.713413593s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:10:07.392173   28167 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:10:07.557747   28167 request.go:629] Waited for 165.500392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes
	I0803 23:10:07.557798   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes
	I0803 23:10:07.557805   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:07.557816   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:07.557825   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:07.561883   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:07.562634   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:10:07.562659   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:10:07.562679   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:10:07.562682   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:10:07.562686   28167 node_conditions.go:105] duration metric: took 170.50921ms to run NodePressure ...
	I0803 23:10:07.562701   28167 start.go:241] waiting for startup goroutines ...
	I0803 23:10:07.562730   28167 start.go:255] writing updated cluster config ...
	I0803 23:10:07.564884   28167 out.go:177] 
	I0803 23:10:07.566769   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:07.566929   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:10:07.569128   28167 out.go:177] * Starting "ha-076508-m03" control-plane node in "ha-076508" cluster
	I0803 23:10:07.570611   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:10:07.570649   28167 cache.go:56] Caching tarball of preloaded images
	I0803 23:10:07.570811   28167 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:10:07.570829   28167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:10:07.570961   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:10:07.571239   28167 start.go:360] acquireMachinesLock for ha-076508-m03: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:10:07.571306   28167 start.go:364] duration metric: took 38.243µs to acquireMachinesLock for "ha-076508-m03"
	I0803 23:10:07.571343   28167 start.go:93] Provisioning new machine with config: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:10:07.571460   28167 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0803 23:10:07.573238   28167 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:10:07.573404   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:07.573449   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:07.588630   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I0803 23:10:07.589135   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:07.589608   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:07.589630   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:07.590095   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:07.590298   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:07.590494   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:07.590706   28167 start.go:159] libmachine.API.Create for "ha-076508" (driver="kvm2")
	I0803 23:10:07.590740   28167 client.go:168] LocalClient.Create starting
	I0803 23:10:07.590785   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 23:10:07.590819   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:10:07.590833   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:10:07.590884   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 23:10:07.590906   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:10:07.590917   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:10:07.590932   28167 main.go:141] libmachine: Running pre-create checks...
	I0803 23:10:07.590940   28167 main.go:141] libmachine: (ha-076508-m03) Calling .PreCreateCheck
	I0803 23:10:07.591116   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetConfigRaw
	I0803 23:10:07.591656   28167 main.go:141] libmachine: Creating machine...
	I0803 23:10:07.591676   28167 main.go:141] libmachine: (ha-076508-m03) Calling .Create
	I0803 23:10:07.591831   28167 main.go:141] libmachine: (ha-076508-m03) Creating KVM machine...
	I0803 23:10:07.593193   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found existing default KVM network
	I0803 23:10:07.593326   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found existing private KVM network mk-ha-076508
	I0803 23:10:07.593471   28167 main.go:141] libmachine: (ha-076508-m03) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03 ...
	I0803 23:10:07.593532   28167 main.go:141] libmachine: (ha-076508-m03) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:10:07.593618   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.593489   29267 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:10:07.593710   28167 main.go:141] libmachine: (ha-076508-m03) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:10:07.827516   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.827348   29267 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa...
	I0803 23:10:07.977100   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.976988   29267 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/ha-076508-m03.rawdisk...
	I0803 23:10:07.977127   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Writing magic tar header
	I0803 23:10:07.977140   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Writing SSH key tar header
	I0803 23:10:07.977152   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.977109   29267 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03 ...
	I0803 23:10:07.977230   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03
	I0803 23:10:07.977253   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 23:10:07.977267   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03 (perms=drwx------)
	I0803 23:10:07.977281   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:10:07.977292   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 23:10:07.977300   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:10:07.977308   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:10:07.977315   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home
	I0803 23:10:07.977325   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Skipping /home - not owner
	I0803 23:10:07.977376   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:10:07.977394   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 23:10:07.977407   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 23:10:07.977421   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:10:07.977436   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:10:07.977455   28167 main.go:141] libmachine: (ha-076508-m03) Creating domain...
	I0803 23:10:07.978394   28167 main.go:141] libmachine: (ha-076508-m03) define libvirt domain using xml: 
	I0803 23:10:07.978411   28167 main.go:141] libmachine: (ha-076508-m03) <domain type='kvm'>
	I0803 23:10:07.978419   28167 main.go:141] libmachine: (ha-076508-m03)   <name>ha-076508-m03</name>
	I0803 23:10:07.978432   28167 main.go:141] libmachine: (ha-076508-m03)   <memory unit='MiB'>2200</memory>
	I0803 23:10:07.978443   28167 main.go:141] libmachine: (ha-076508-m03)   <vcpu>2</vcpu>
	I0803 23:10:07.978455   28167 main.go:141] libmachine: (ha-076508-m03)   <features>
	I0803 23:10:07.978464   28167 main.go:141] libmachine: (ha-076508-m03)     <acpi/>
	I0803 23:10:07.978475   28167 main.go:141] libmachine: (ha-076508-m03)     <apic/>
	I0803 23:10:07.978486   28167 main.go:141] libmachine: (ha-076508-m03)     <pae/>
	I0803 23:10:07.978495   28167 main.go:141] libmachine: (ha-076508-m03)     
	I0803 23:10:07.978504   28167 main.go:141] libmachine: (ha-076508-m03)   </features>
	I0803 23:10:07.978514   28167 main.go:141] libmachine: (ha-076508-m03)   <cpu mode='host-passthrough'>
	I0803 23:10:07.978538   28167 main.go:141] libmachine: (ha-076508-m03)   
	I0803 23:10:07.978557   28167 main.go:141] libmachine: (ha-076508-m03)   </cpu>
	I0803 23:10:07.978563   28167 main.go:141] libmachine: (ha-076508-m03)   <os>
	I0803 23:10:07.978569   28167 main.go:141] libmachine: (ha-076508-m03)     <type>hvm</type>
	I0803 23:10:07.978575   28167 main.go:141] libmachine: (ha-076508-m03)     <boot dev='cdrom'/>
	I0803 23:10:07.978586   28167 main.go:141] libmachine: (ha-076508-m03)     <boot dev='hd'/>
	I0803 23:10:07.978592   28167 main.go:141] libmachine: (ha-076508-m03)     <bootmenu enable='no'/>
	I0803 23:10:07.978598   28167 main.go:141] libmachine: (ha-076508-m03)   </os>
	I0803 23:10:07.978604   28167 main.go:141] libmachine: (ha-076508-m03)   <devices>
	I0803 23:10:07.978614   28167 main.go:141] libmachine: (ha-076508-m03)     <disk type='file' device='cdrom'>
	I0803 23:10:07.978626   28167 main.go:141] libmachine: (ha-076508-m03)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/boot2docker.iso'/>
	I0803 23:10:07.978632   28167 main.go:141] libmachine: (ha-076508-m03)       <target dev='hdc' bus='scsi'/>
	I0803 23:10:07.978638   28167 main.go:141] libmachine: (ha-076508-m03)       <readonly/>
	I0803 23:10:07.978644   28167 main.go:141] libmachine: (ha-076508-m03)     </disk>
	I0803 23:10:07.978650   28167 main.go:141] libmachine: (ha-076508-m03)     <disk type='file' device='disk'>
	I0803 23:10:07.978665   28167 main.go:141] libmachine: (ha-076508-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:10:07.978675   28167 main.go:141] libmachine: (ha-076508-m03)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/ha-076508-m03.rawdisk'/>
	I0803 23:10:07.978683   28167 main.go:141] libmachine: (ha-076508-m03)       <target dev='hda' bus='virtio'/>
	I0803 23:10:07.978689   28167 main.go:141] libmachine: (ha-076508-m03)     </disk>
	I0803 23:10:07.978698   28167 main.go:141] libmachine: (ha-076508-m03)     <interface type='network'>
	I0803 23:10:07.978710   28167 main.go:141] libmachine: (ha-076508-m03)       <source network='mk-ha-076508'/>
	I0803 23:10:07.978719   28167 main.go:141] libmachine: (ha-076508-m03)       <model type='virtio'/>
	I0803 23:10:07.978743   28167 main.go:141] libmachine: (ha-076508-m03)     </interface>
	I0803 23:10:07.978760   28167 main.go:141] libmachine: (ha-076508-m03)     <interface type='network'>
	I0803 23:10:07.978769   28167 main.go:141] libmachine: (ha-076508-m03)       <source network='default'/>
	I0803 23:10:07.978777   28167 main.go:141] libmachine: (ha-076508-m03)       <model type='virtio'/>
	I0803 23:10:07.978792   28167 main.go:141] libmachine: (ha-076508-m03)     </interface>
	I0803 23:10:07.978808   28167 main.go:141] libmachine: (ha-076508-m03)     <serial type='pty'>
	I0803 23:10:07.978822   28167 main.go:141] libmachine: (ha-076508-m03)       <target port='0'/>
	I0803 23:10:07.978832   28167 main.go:141] libmachine: (ha-076508-m03)     </serial>
	I0803 23:10:07.978843   28167 main.go:141] libmachine: (ha-076508-m03)     <console type='pty'>
	I0803 23:10:07.978849   28167 main.go:141] libmachine: (ha-076508-m03)       <target type='serial' port='0'/>
	I0803 23:10:07.978855   28167 main.go:141] libmachine: (ha-076508-m03)     </console>
	I0803 23:10:07.978868   28167 main.go:141] libmachine: (ha-076508-m03)     <rng model='virtio'>
	I0803 23:10:07.978883   28167 main.go:141] libmachine: (ha-076508-m03)       <backend model='random'>/dev/random</backend>
	I0803 23:10:07.978892   28167 main.go:141] libmachine: (ha-076508-m03)     </rng>
	I0803 23:10:07.978915   28167 main.go:141] libmachine: (ha-076508-m03)     
	I0803 23:10:07.978933   28167 main.go:141] libmachine: (ha-076508-m03)     
	I0803 23:10:07.978943   28167 main.go:141] libmachine: (ha-076508-m03)   </devices>
	I0803 23:10:07.978951   28167 main.go:141] libmachine: (ha-076508-m03) </domain>
	I0803 23:10:07.978966   28167 main.go:141] libmachine: (ha-076508-m03) 
	I0803 23:10:07.987006   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:e0:40:7d in network default
	I0803 23:10:07.987567   28167 main.go:141] libmachine: (ha-076508-m03) Ensuring networks are active...
	I0803 23:10:07.987583   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:07.988584   28167 main.go:141] libmachine: (ha-076508-m03) Ensuring network default is active
	I0803 23:10:07.988860   28167 main.go:141] libmachine: (ha-076508-m03) Ensuring network mk-ha-076508 is active
	I0803 23:10:07.989256   28167 main.go:141] libmachine: (ha-076508-m03) Getting domain xml...
	I0803 23:10:07.990103   28167 main.go:141] libmachine: (ha-076508-m03) Creating domain...
	I0803 23:10:09.248349   28167 main.go:141] libmachine: (ha-076508-m03) Waiting to get IP...
	I0803 23:10:09.249200   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:09.249636   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:09.249689   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:09.249618   29267 retry.go:31] will retry after 285.933143ms: waiting for machine to come up
	I0803 23:10:09.537243   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:09.537744   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:09.537770   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:09.537704   29267 retry.go:31] will retry after 249.301407ms: waiting for machine to come up
	I0803 23:10:09.788109   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:09.788657   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:09.788686   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:09.788601   29267 retry.go:31] will retry after 335.559043ms: waiting for machine to come up
	I0803 23:10:10.126156   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:10.126620   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:10.126650   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:10.126554   29267 retry.go:31] will retry after 474.638702ms: waiting for machine to come up
	I0803 23:10:10.602678   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:10.603108   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:10.603133   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:10.603061   29267 retry.go:31] will retry after 685.693379ms: waiting for machine to come up
	I0803 23:10:11.289879   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:11.290287   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:11.290313   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:11.290238   29267 retry.go:31] will retry after 607.834329ms: waiting for machine to come up
	I0803 23:10:11.899542   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:11.899975   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:11.900003   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:11.899920   29267 retry.go:31] will retry after 1.161412916s: waiting for machine to come up
	I0803 23:10:13.063410   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:13.063935   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:13.063964   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:13.063899   29267 retry.go:31] will retry after 1.250338083s: waiting for machine to come up
	I0803 23:10:14.315473   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:14.315910   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:14.315938   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:14.315861   29267 retry.go:31] will retry after 1.544589706s: waiting for machine to come up
	I0803 23:10:15.862400   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:15.862856   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:15.862873   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:15.862829   29267 retry.go:31] will retry after 1.643124459s: waiting for machine to come up
	I0803 23:10:17.507142   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:17.507682   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:17.507708   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:17.507633   29267 retry.go:31] will retry after 2.036118191s: waiting for machine to come up
	I0803 23:10:19.546457   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:19.547016   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:19.547064   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:19.546973   29267 retry.go:31] will retry after 2.436825652s: waiting for machine to come up
	I0803 23:10:21.986604   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:21.987159   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:21.987185   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:21.987096   29267 retry.go:31] will retry after 3.233370764s: waiting for machine to come up
	I0803 23:10:25.223298   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:25.223812   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:25.223835   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:25.223775   29267 retry.go:31] will retry after 4.665419653s: waiting for machine to come up
	I0803 23:10:29.890441   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.890851   28167 main.go:141] libmachine: (ha-076508-m03) Found IP for machine: 192.168.39.86
	I0803 23:10:29.890873   28167 main.go:141] libmachine: (ha-076508-m03) Reserving static IP address...
	I0803 23:10:29.890889   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has current primary IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.891328   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find host DHCP lease matching {name: "ha-076508-m03", mac: "52:54:00:f0:20:c2", ip: "192.168.39.86"} in network mk-ha-076508
	I0803 23:10:29.968716   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Getting to WaitForSSH function...
	I0803 23:10:29.968773   28167 main.go:141] libmachine: (ha-076508-m03) Reserved static IP address: 192.168.39.86
	I0803 23:10:29.968789   28167 main.go:141] libmachine: (ha-076508-m03) Waiting for SSH to be available...
	I0803 23:10:29.971322   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.971833   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:29.971860   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.972036   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Using SSH client type: external
	I0803 23:10:29.972061   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa (-rw-------)
	I0803 23:10:29.972099   28167 main.go:141] libmachine: (ha-076508-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:10:29.972119   28167 main.go:141] libmachine: (ha-076508-m03) DBG | About to run SSH command:
	I0803 23:10:29.972130   28167 main.go:141] libmachine: (ha-076508-m03) DBG | exit 0
	I0803 23:10:30.097458   28167 main.go:141] libmachine: (ha-076508-m03) DBG | SSH cmd err, output: <nil>: 
	I0803 23:10:30.097656   28167 main.go:141] libmachine: (ha-076508-m03) KVM machine creation complete!
	I0803 23:10:30.098051   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetConfigRaw
	I0803 23:10:30.098550   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:30.098752   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:30.098895   28167 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:10:30.098911   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:10:30.100080   28167 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:10:30.100103   28167 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:10:30.100111   28167 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:10:30.100117   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.102661   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.103076   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.103106   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.103226   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.103431   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.103588   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.103724   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.103874   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.104109   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.104123   28167 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:10:30.208714   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:10:30.208741   28167 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:10:30.208752   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.211697   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.212050   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.212080   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.212250   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.212429   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.212596   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.212772   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.212933   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.213132   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.213150   28167 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:10:30.314316   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:10:30.314413   28167 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:10:30.314425   28167 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:10:30.314441   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:30.314709   28167 buildroot.go:166] provisioning hostname "ha-076508-m03"
	I0803 23:10:30.314739   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:30.314975   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.317995   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.318447   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.318470   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.318551   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.318747   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.318921   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.319069   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.319229   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.319432   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.319448   28167 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508-m03 && echo "ha-076508-m03" | sudo tee /etc/hostname
	I0803 23:10:30.435894   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508-m03
	
	I0803 23:10:30.435924   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.438653   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.438955   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.438980   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.439130   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.439306   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.439468   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.439639   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.439815   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.440025   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.440043   28167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:10:30.551603   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:10:30.551635   28167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:10:30.551651   28167 buildroot.go:174] setting up certificates
	I0803 23:10:30.551660   28167 provision.go:84] configureAuth start
	I0803 23:10:30.551668   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:30.551973   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:30.554323   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.554598   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.554650   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.554762   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.556944   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.557367   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.557395   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.557474   28167 provision.go:143] copyHostCerts
	I0803 23:10:30.557505   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:10:30.557541   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:10:30.557550   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:10:30.557610   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:10:30.557690   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:10:30.557709   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:10:30.557713   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:10:30.557741   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:10:30.557819   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:10:30.557839   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:10:30.557843   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:10:30.557866   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:10:30.557913   28167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508-m03 san=[127.0.0.1 192.168.39.86 ha-076508-m03 localhost minikube]
	I0803 23:10:30.655066   28167 provision.go:177] copyRemoteCerts
	I0803 23:10:30.655117   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:10:30.655138   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.657642   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.657986   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.658015   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.658268   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.658485   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.658623   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.658764   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:30.740302   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:10:30.740367   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:10:30.766773   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:10:30.766854   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:10:30.794641   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:10:30.794705   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:10:30.821626   28167 provision.go:87] duration metric: took 269.952761ms to configureAuth
	I0803 23:10:30.821653   28167 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:10:30.821926   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:30.822025   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.825020   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.825452   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.825483   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.825722   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.825965   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.826144   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.826280   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.826430   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.826598   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.826612   28167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:10:31.111311   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:10:31.111343   28167 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:10:31.111355   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetURL
	I0803 23:10:31.112737   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Using libvirt version 6000000
	I0803 23:10:31.115452   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.115868   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.115897   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.116133   28167 main.go:141] libmachine: Docker is up and running!
	I0803 23:10:31.116146   28167 main.go:141] libmachine: Reticulating splines...
	I0803 23:10:31.116152   28167 client.go:171] duration metric: took 23.525402572s to LocalClient.Create
	I0803 23:10:31.116173   28167 start.go:167] duration metric: took 23.52546941s to libmachine.API.Create "ha-076508"
	I0803 23:10:31.116188   28167 start.go:293] postStartSetup for "ha-076508-m03" (driver="kvm2")
	I0803 23:10:31.116200   28167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:10:31.116216   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.116431   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:10:31.116452   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:31.118369   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.118630   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.118657   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.118808   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.118971   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.119164   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.119312   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:31.200460   28167 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:10:31.205806   28167 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:10:31.205840   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:10:31.205987   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:10:31.206177   28167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:10:31.206194   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:10:31.206305   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:10:31.218211   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:10:31.244937   28167 start.go:296] duration metric: took 128.728685ms for postStartSetup
	I0803 23:10:31.245009   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetConfigRaw
	I0803 23:10:31.245627   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:31.248661   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.249046   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.249067   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.249472   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:10:31.249715   28167 start.go:128] duration metric: took 23.678244602s to createHost
	I0803 23:10:31.249756   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:31.252488   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.252922   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.252953   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.253184   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.253406   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.253616   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.253794   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.253975   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:31.254174   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:31.254190   28167 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:10:31.354522   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722726631.327455448
	
	I0803 23:10:31.354546   28167 fix.go:216] guest clock: 1722726631.327455448
	I0803 23:10:31.354556   28167 fix.go:229] Guest: 2024-08-03 23:10:31.327455448 +0000 UTC Remote: 2024-08-03 23:10:31.249737563 +0000 UTC m=+223.792678543 (delta=77.717885ms)
	I0803 23:10:31.354580   28167 fix.go:200] guest clock delta is within tolerance: 77.717885ms
	I0803 23:10:31.354587   28167 start.go:83] releasing machines lock for "ha-076508-m03", held for 23.783271299s
	I0803 23:10:31.354611   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.354933   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:31.358012   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.358446   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.358474   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.360942   28167 out.go:177] * Found network options:
	I0803 23:10:31.362445   28167 out.go:177]   - NO_PROXY=192.168.39.154,192.168.39.245
	W0803 23:10:31.363709   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:10:31.363733   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:10:31.363747   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.364321   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.364504   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.364600   28167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:10:31.364632   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	W0803 23:10:31.364726   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:10:31.364750   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:10:31.364853   28167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:10:31.364875   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:31.367662   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.367686   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.368094   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.368129   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.368158   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.368185   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.368295   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.368415   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.368462   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.368541   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.368611   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.368672   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.368724   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:31.368783   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:31.614534   28167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:10:31.621221   28167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:10:31.621279   28167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:10:31.640586   28167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:10:31.640614   28167 start.go:495] detecting cgroup driver to use...
	I0803 23:10:31.640697   28167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:10:31.661027   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:10:31.677884   28167 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:10:31.677966   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:10:31.694226   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:10:31.708499   28167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:10:31.825583   28167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:10:31.997400   28167 docker.go:233] disabling docker service ...
	I0803 23:10:31.997472   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:10:32.012727   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:10:32.026457   28167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:10:32.154114   28167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:10:32.278557   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:10:32.295162   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:10:32.315162   28167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:10:32.315252   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.326283   28167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:10:32.326343   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.338237   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.349853   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.361904   28167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:10:32.373916   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.385926   28167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.406945   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.421449   28167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:10:32.431888   28167 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:10:32.431957   28167 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:10:32.448988   28167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:10:32.459616   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:10:32.588640   28167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:10:32.726397   28167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:10:32.726470   28167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:10:32.731304   28167 start.go:563] Will wait 60s for crictl version
	I0803 23:10:32.731349   28167 ssh_runner.go:195] Run: which crictl
	I0803 23:10:32.735182   28167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:10:32.774180   28167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:10:32.774271   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:10:32.804446   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:10:32.836356   28167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:10:32.837737   28167 out.go:177]   - env NO_PROXY=192.168.39.154
	I0803 23:10:32.838985   28167 out.go:177]   - env NO_PROXY=192.168.39.154,192.168.39.245
	I0803 23:10:32.840314   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:32.843214   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:32.843728   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:32.843754   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:32.843977   28167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:10:32.848385   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:10:32.861630   28167 mustload.go:65] Loading cluster: ha-076508
	I0803 23:10:32.861891   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:32.862154   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:32.862192   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:32.877838   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0803 23:10:32.878216   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:32.878763   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:32.878783   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:32.879142   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:32.879328   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:10:32.880742   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:10:32.881034   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:32.881066   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:32.896078   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45901
	I0803 23:10:32.896488   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:32.896941   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:32.896964   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:32.897260   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:32.897452   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:10:32.897618   28167 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.86
	I0803 23:10:32.897629   28167 certs.go:194] generating shared ca certs ...
	I0803 23:10:32.897645   28167 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:10:32.897787   28167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:10:32.897840   28167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:10:32.897857   28167 certs.go:256] generating profile certs ...
	I0803 23:10:32.897967   28167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:10:32.897998   28167 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537
	I0803 23:10:32.898022   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.245 192.168.39.86 192.168.39.254]
	I0803 23:10:33.154134   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537 ...
	I0803 23:10:33.154168   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537: {Name:mk682d6ecfb96dbed7a4b277a1a22d21b911660e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:10:33.154392   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537 ...
	I0803 23:10:33.154411   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537: {Name:mk9319630d2a2fe9289f3b8bdf9a93cb217ef0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:10:33.154554   28167 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:10:33.154758   28167 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
	I0803 23:10:33.154959   28167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:10:33.154984   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:10:33.155009   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:10:33.155032   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:10:33.155055   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:10:33.155074   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:10:33.155097   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:10:33.155121   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:10:33.155142   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:10:33.155214   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:10:33.155258   28167 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:10:33.155274   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:10:33.155308   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:10:33.155347   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:10:33.155387   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:10:33.155450   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:10:33.155493   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.155516   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.155538   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.155585   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:10:33.158701   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:33.159165   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:10:33.159195   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:33.159391   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:10:33.159619   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:10:33.159818   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:10:33.159994   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:10:33.241782   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0803 23:10:33.247573   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:10:33.261083   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0803 23:10:33.265717   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0803 23:10:33.278224   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:10:33.283382   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:10:33.295572   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:10:33.300485   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:10:33.320637   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:10:33.325493   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:10:33.339599   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0803 23:10:33.345306   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:10:33.358174   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:10:33.386515   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:10:33.413013   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:10:33.437564   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:10:33.462725   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0803 23:10:33.488324   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:10:33.516049   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:10:33.545077   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:10:33.571339   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:10:33.598911   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:10:33.624365   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:10:33.650645   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:10:33.668748   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0803 23:10:33.685967   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:10:33.702890   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:10:33.720867   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:10:33.738443   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:10:33.756919   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:10:33.774564   28167 ssh_runner.go:195] Run: openssl version
	I0803 23:10:33.781110   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:10:33.793528   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.798625   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.798691   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.804731   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:10:33.815861   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:10:33.828021   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.833448   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.833508   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.839587   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:10:33.850838   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:10:33.862244   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.867289   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.867355   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.873268   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:10:33.885300   28167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:10:33.890043   28167 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:10:33.890102   28167 kubeadm.go:934] updating node {m03 192.168.39.86 8443 v1.30.3 crio true true} ...
	I0803 23:10:33.890215   28167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:10:33.890247   28167 kube-vip.go:115] generating kube-vip config ...
	I0803 23:10:33.890296   28167 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:10:33.908038   28167 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:10:33.908123   28167 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:10:33.908185   28167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:10:33.918317   28167 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:10:33.918388   28167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:10:33.928855   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:10:33.928882   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0803 23:10:33.928899   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:10:33.928908   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0803 23:10:33.928925   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:10:33.928929   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:10:33.928974   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:10:33.928988   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:10:33.935130   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:10:33.935162   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:10:33.967603   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:10:33.967652   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:10:33.967667   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:10:33.967745   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:10:34.015821   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:10:34.015864   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0803 23:10:34.859585   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:10:34.870219   28167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0803 23:10:34.889025   28167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:10:34.908786   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:10:34.929133   28167 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:10:34.933650   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:10:34.947347   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:10:35.076750   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:10:35.095743   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:10:35.096190   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:35.096239   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:35.112256   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0803 23:10:35.112710   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:35.113188   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:35.113215   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:35.113579   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:35.113806   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:10:35.114009   28167 start.go:317] joinCluster: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:10:35.114170   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:10:35.114187   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:10:35.117526   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:35.118039   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:10:35.118085   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:35.118254   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:10:35.118445   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:10:35.118633   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:10:35.118784   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:10:35.296853   28167 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:10:35.296901   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9kwipv.ic5tyi0dwv1kfzk7 --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m03 --control-plane --apiserver-advertise-address=192.168.39.86 --apiserver-bind-port=8443"
	I0803 23:10:57.974189   28167 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9kwipv.ic5tyi0dwv1kfzk7 --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m03 --control-plane --apiserver-advertise-address=192.168.39.86 --apiserver-bind-port=8443": (22.677248756s)
	I0803 23:10:57.974236   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:10:58.624270   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076508-m03 minikube.k8s.io/updated_at=2024_08_03T23_10_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=ha-076508 minikube.k8s.io/primary=false
	I0803 23:10:58.762992   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076508-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0803 23:10:58.902499   28167 start.go:319] duration metric: took 23.788486948s to joinCluster
	I0803 23:10:58.902601   28167 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:10:58.902952   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:58.904243   28167 out.go:177] * Verifying Kubernetes components...
	I0803 23:10:58.905653   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:10:59.196121   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:10:59.238713   28167 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:10:59.238967   28167 kapi.go:59] client config for ha-076508: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:10:59.239048   28167 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.154:8443
	I0803 23:10:59.239366   28167 node_ready.go:35] waiting up to 6m0s for node "ha-076508-m03" to be "Ready" ...
	I0803 23:10:59.239462   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:10:59.239473   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:59.239483   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:59.239490   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:59.243182   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:59.739904   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:10:59.739927   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:59.739938   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:59.739944   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:59.743828   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:00.239826   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:00.239851   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:00.239861   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:00.239866   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:00.247137   28167 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:11:00.740151   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:00.740179   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:00.740188   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:00.740192   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:00.744188   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:01.240343   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:01.240365   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:01.240373   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:01.240377   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:01.244256   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:01.245076   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:01.740544   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:01.740565   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:01.740573   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:01.740578   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:01.744202   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:02.240319   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:02.240339   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:02.240347   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:02.240351   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:02.244191   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:02.739893   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:02.739920   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:02.739932   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:02.739937   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:02.743986   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:03.240246   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:03.240272   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:03.240286   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:03.240298   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:03.243994   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:03.739696   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:03.739717   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:03.739725   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:03.739730   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:03.743510   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:03.744170   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:04.239725   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:04.239744   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:04.239753   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:04.239757   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:04.244068   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:04.740393   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:04.740414   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:04.740422   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:04.740426   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:04.743938   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:05.239873   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:05.239901   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:05.239911   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:05.239916   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:05.243398   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:05.740403   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:05.740423   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:05.740431   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:05.740434   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:05.744314   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:05.745066   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:06.240357   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:06.240379   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:06.240387   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:06.240390   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:06.244366   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:06.740288   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:06.740311   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:06.740320   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:06.740323   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:06.743678   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:07.240322   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:07.240351   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:07.240361   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:07.240369   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:07.244184   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:07.740196   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:07.740219   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:07.740228   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:07.740231   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:07.743629   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:08.239630   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:08.239653   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:08.239663   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:08.239667   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:08.243194   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:08.243884   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:08.740350   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:08.740377   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:08.740387   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:08.740394   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:08.743980   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:09.239860   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:09.239881   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:09.239892   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:09.239897   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:09.243737   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:09.739830   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:09.739851   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:09.739858   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:09.739861   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:09.743539   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:10.240370   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:10.240391   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:10.240399   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:10.240402   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:10.243764   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:10.244359   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:10.740558   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:10.740579   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:10.740587   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:10.740591   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:10.744298   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:11.239575   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:11.239597   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:11.239606   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:11.239610   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:11.243363   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:11.739985   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:11.740007   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:11.740015   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:11.740020   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:11.743671   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:12.239986   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:12.240009   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:12.240017   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:12.240022   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:12.243616   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:12.740619   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:12.740642   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:12.740652   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:12.740660   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:12.744548   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:12.745257   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:13.240499   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:13.240520   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:13.240528   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:13.240532   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:13.244373   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:13.739833   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:13.739855   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:13.739865   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:13.739871   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:13.745530   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:11:14.239857   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:14.239886   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:14.239895   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:14.239902   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:14.243621   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:14.739715   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:14.739735   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:14.739745   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:14.739750   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:14.743484   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:15.240627   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:15.240652   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:15.240664   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:15.240670   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:15.244239   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:15.244932   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:15.740084   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:15.740109   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:15.740117   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:15.740121   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:15.743654   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:16.239580   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:16.239599   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:16.239608   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:16.239612   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:16.243332   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:16.740428   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:16.740449   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:16.740457   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:16.740462   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:16.743902   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:17.240042   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:17.240065   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.240075   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.240081   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.244451   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:17.245084   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:17.739546   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:17.739570   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.739582   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.739592   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.742972   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:17.743691   28167 node_ready.go:49] node "ha-076508-m03" has status "Ready":"True"
	I0803 23:11:17.743712   28167 node_ready.go:38] duration metric: took 18.504330252s for node "ha-076508-m03" to be "Ready" ...
	I0803 23:11:17.743722   28167 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:11:17.743799   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:17.743812   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.743822   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.743833   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.750270   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:11:17.759237   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.759337   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g4nns
	I0803 23:11:17.759351   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.759362   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.759366   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.762579   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:17.763292   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:17.763308   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.763316   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.763320   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.766063   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.766587   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.766604   28167 pod_ready.go:81] duration metric: took 7.337575ms for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.766612   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.766662   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jm52b
	I0803 23:11:17.766669   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.766676   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.766680   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.769067   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.769729   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:17.769742   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.769749   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.769754   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.772200   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.772747   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.772766   28167 pod_ready.go:81] duration metric: took 6.14586ms for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.772778   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.772853   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508
	I0803 23:11:17.772863   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.772870   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.772874   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.775414   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.775905   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:17.775947   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.775966   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.775975   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.778731   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.779137   28167 pod_ready.go:92] pod "etcd-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.779155   28167 pod_ready.go:81] duration metric: took 6.368718ms for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.779167   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.779221   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m02
	I0803 23:11:17.779232   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.779243   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.779249   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.781797   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.782355   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:17.782379   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.782389   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.782396   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.784884   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.785468   28167 pod_ready.go:92] pod "etcd-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.785487   28167 pod_ready.go:81] duration metric: took 6.312132ms for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.785500   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.940418   28167 request.go:629] Waited for 154.860404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m03
	I0803 23:11:17.940479   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m03
	I0803 23:11:17.940486   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.940496   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.940502   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.943696   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.139691   28167 request.go:629] Waited for 195.313563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:18.139744   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:18.139749   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.139757   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.139761   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.144233   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:18.145050   28167 pod_ready.go:92] pod "etcd-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:18.145077   28167 pod_ready.go:81] duration metric: took 359.569179ms for pod "etcd-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.145099   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.340127   28167 request.go:629] Waited for 194.966235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:11:18.340186   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:11:18.340194   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.340201   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.340208   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.343635   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.539663   28167 request.go:629] Waited for 195.273755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:18.539711   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:18.539716   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.539723   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.539729   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.543681   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.544389   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:18.544416   28167 pod_ready.go:81] duration metric: took 399.307162ms for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.544429   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.740437   28167 request.go:629] Waited for 195.929852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:11:18.740507   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:11:18.740514   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.740525   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.740532   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.743910   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.940148   28167 request.go:629] Waited for 195.390341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:18.940208   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:18.940214   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.940224   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.940232   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.943491   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.943930   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:18.943946   28167 pod_ready.go:81] duration metric: took 399.509056ms for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.943955   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.140099   28167 request.go:629] Waited for 196.077883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m03
	I0803 23:11:19.140168   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m03
	I0803 23:11:19.140174   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.140181   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.140187   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.144623   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:19.339857   28167 request.go:629] Waited for 194.358756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:19.339922   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:19.339930   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.339940   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.339946   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.343508   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:19.344247   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:19.344266   28167 pod_ready.go:81] duration metric: took 400.304551ms for pod "kube-apiserver-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.344276   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.540358   28167 request.go:629] Waited for 196.023302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:11:19.540431   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:11:19.540438   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.540448   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.540458   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.544200   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:19.740324   28167 request.go:629] Waited for 195.356736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:19.740377   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:19.740382   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.740390   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.740394   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.743827   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:19.744761   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:19.744778   28167 pod_ready.go:81] duration metric: took 400.494408ms for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.744792   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.940275   28167 request.go:629] Waited for 195.423466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:11:19.940362   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:11:19.940373   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.940384   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.940391   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.944244   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.140475   28167 request.go:629] Waited for 195.324746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:20.140541   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:20.140547   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.140557   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.140564   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.144353   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.145194   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:20.145214   28167 pod_ready.go:81] duration metric: took 400.413105ms for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.145224   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.340356   28167 request.go:629] Waited for 195.046793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m03
	I0803 23:11:20.340418   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m03
	I0803 23:11:20.340425   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.340437   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.340449   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.343958   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.539967   28167 request.go:629] Waited for 195.367001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.540154   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.540175   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.540187   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.540192   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.543615   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.544156   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:20.544177   28167 pod_ready.go:81] duration metric: took 398.945931ms for pod "kube-controller-manager-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.544190   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7kmfh" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.739937   28167 request.go:629] Waited for 195.685945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kmfh
	I0803 23:11:20.740015   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kmfh
	I0803 23:11:20.740024   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.740033   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.740041   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.743950   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.940068   28167 request.go:629] Waited for 195.366819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.940173   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.940194   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.940203   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.940211   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.943592   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.944210   28167 pod_ready.go:92] pod "kube-proxy-7kmfh" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:20.944230   28167 pod_ready.go:81] duration metric: took 400.028865ms for pod "kube-proxy-7kmfh" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.944243   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.140332   28167 request.go:629] Waited for 196.016119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:11:21.140411   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:11:21.140424   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.140435   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.140441   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.144074   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:21.340110   28167 request.go:629] Waited for 195.174379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:21.340168   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:21.340173   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.340181   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.340188   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.343734   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:21.344378   28167 pod_ready.go:92] pod "kube-proxy-hkfgl" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:21.344396   28167 pod_ready.go:81] duration metric: took 400.141836ms for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.344406   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.540579   28167 request.go:629] Waited for 196.118535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:11:21.540671   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:11:21.540682   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.540694   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.540702   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.544883   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:21.740073   28167 request.go:629] Waited for 194.356205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:21.740133   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:21.740140   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.740151   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.740161   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.743418   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:21.744057   28167 pod_ready.go:92] pod "kube-proxy-jvj96" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:21.744072   28167 pod_ready.go:81] duration metric: took 399.661209ms for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.744081   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.940238   28167 request.go:629] Waited for 196.090504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:11:21.940298   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:11:21.940306   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.940315   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.940321   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.943662   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.139622   28167 request.go:629] Waited for 195.274565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:22.139721   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:22.139734   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.139745   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.139753   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.144185   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:22.145072   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:22.145091   28167 pod_ready.go:81] duration metric: took 401.003535ms for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.145100   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.339598   28167 request.go:629] Waited for 194.41909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:11:22.339661   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:11:22.339667   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.339674   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.339679   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.343092   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.539739   28167 request.go:629] Waited for 196.145646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:22.539808   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:22.539813   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.539820   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.539825   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.543323   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.543946   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:22.543965   28167 pod_ready.go:81] duration metric: took 398.855106ms for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.543974   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.739975   28167 request.go:629] Waited for 195.945481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m03
	I0803 23:11:22.740046   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m03
	I0803 23:11:22.740054   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.740064   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.740074   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.743288   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.940026   28167 request.go:629] Waited for 195.97419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:22.940101   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:22.940107   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.940114   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.940121   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.943834   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.944320   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:22.944341   28167 pod_ready.go:81] duration metric: took 400.36089ms for pod "kube-scheduler-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.944354   28167 pod_ready.go:38] duration metric: took 5.200616734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:11:22.944371   28167 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:11:22.944420   28167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:11:22.959767   28167 api_server.go:72] duration metric: took 24.05713082s to wait for apiserver process to appear ...
	I0803 23:11:22.959810   28167 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:11:22.959829   28167 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I0803 23:11:22.964677   28167 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I0803 23:11:22.964737   28167 round_trippers.go:463] GET https://192.168.39.154:8443/version
	I0803 23:11:22.964745   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.964752   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.964759   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.965925   28167 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0803 23:11:22.965986   28167 api_server.go:141] control plane version: v1.30.3
	I0803 23:11:22.965996   28167 api_server.go:131] duration metric: took 6.180078ms to wait for apiserver health ...
	I0803 23:11:22.966006   28167 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:11:23.140292   28167 request.go:629] Waited for 174.228703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.140372   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.140378   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.140385   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.140390   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.147378   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:11:23.154675   28167 system_pods.go:59] 24 kube-system pods found
	I0803 23:11:23.154710   28167 system_pods.go:61] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:11:23.154717   28167 system_pods.go:61] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:11:23.154723   28167 system_pods.go:61] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:11:23.154727   28167 system_pods.go:61] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:11:23.154732   28167 system_pods.go:61] "etcd-ha-076508-m03" [e13f1f48-0494-4c42-852b-34bb56b06d64] Running
	I0803 23:11:23.154737   28167 system_pods.go:61] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:11:23.154742   28167 system_pods.go:61] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:11:23.154746   28167 system_pods.go:61] "kindnet-tzzq4" [42e5000f-b60a-404c-9e0a-0a414d305d03] Running
	I0803 23:11:23.154751   28167 system_pods.go:61] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:11:23.154757   28167 system_pods.go:61] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:11:23.154765   28167 system_pods.go:61] "kube-apiserver-ha-076508-m03" [035ef875-a6d9-40c6-982e-8fe6200ab98e] Running
	I0803 23:11:23.154774   28167 system_pods.go:61] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:11:23.154779   28167 system_pods.go:61] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:11:23.154787   28167 system_pods.go:61] "kube-controller-manager-ha-076508-m03" [108437fc-1c9a-4729-9d08-ebaf35e67bad] Running
	I0803 23:11:23.154791   28167 system_pods.go:61] "kube-proxy-7kmfh" [5bc5276d-480b-4c95-b6c2-0cbb2898d290] Running
	I0803 23:11:23.154796   28167 system_pods.go:61] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:11:23.154801   28167 system_pods.go:61] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:11:23.154807   28167 system_pods.go:61] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:11:23.154813   28167 system_pods.go:61] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:11:23.154820   28167 system_pods.go:61] "kube-scheduler-ha-076508-m03" [ead599ec-1d46-4457-850d-d189b57597c5] Running
	I0803 23:11:23.154825   28167 system_pods.go:61] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:11:23.154831   28167 system_pods.go:61] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:11:23.154835   28167 system_pods.go:61] "kube-vip-ha-076508-m03" [61ffbdc1-4caa-450c-8c00-29bca8fccd59] Running
	I0803 23:11:23.154842   28167 system_pods.go:61] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:11:23.154848   28167 system_pods.go:74] duration metric: took 188.836543ms to wait for pod list to return data ...
	I0803 23:11:23.154858   28167 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:11:23.340268   28167 request.go:629] Waited for 185.343692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:11:23.340326   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:11:23.340333   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.340344   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.340349   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.343708   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:23.343878   28167 default_sa.go:45] found service account: "default"
	I0803 23:11:23.343901   28167 default_sa.go:55] duration metric: took 189.036567ms for default service account to be created ...
	I0803 23:11:23.343911   28167 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:11:23.540341   28167 request.go:629] Waited for 196.355004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.540410   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.540419   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.540429   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.540439   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.547038   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:11:23.553622   28167 system_pods.go:86] 24 kube-system pods found
	I0803 23:11:23.553643   28167 system_pods.go:89] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:11:23.553649   28167 system_pods.go:89] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:11:23.553653   28167 system_pods.go:89] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:11:23.553657   28167 system_pods.go:89] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:11:23.553662   28167 system_pods.go:89] "etcd-ha-076508-m03" [e13f1f48-0494-4c42-852b-34bb56b06d64] Running
	I0803 23:11:23.553665   28167 system_pods.go:89] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:11:23.553669   28167 system_pods.go:89] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:11:23.553673   28167 system_pods.go:89] "kindnet-tzzq4" [42e5000f-b60a-404c-9e0a-0a414d305d03] Running
	I0803 23:11:23.553677   28167 system_pods.go:89] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:11:23.553682   28167 system_pods.go:89] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:11:23.553688   28167 system_pods.go:89] "kube-apiserver-ha-076508-m03" [035ef875-a6d9-40c6-982e-8fe6200ab98e] Running
	I0803 23:11:23.553693   28167 system_pods.go:89] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:11:23.553700   28167 system_pods.go:89] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:11:23.553705   28167 system_pods.go:89] "kube-controller-manager-ha-076508-m03" [108437fc-1c9a-4729-9d08-ebaf35e67bad] Running
	I0803 23:11:23.553709   28167 system_pods.go:89] "kube-proxy-7kmfh" [5bc5276d-480b-4c95-b6c2-0cbb2898d290] Running
	I0803 23:11:23.553712   28167 system_pods.go:89] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:11:23.553717   28167 system_pods.go:89] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:11:23.553722   28167 system_pods.go:89] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:11:23.553728   28167 system_pods.go:89] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:11:23.553732   28167 system_pods.go:89] "kube-scheduler-ha-076508-m03" [ead599ec-1d46-4457-850d-d189b57597c5] Running
	I0803 23:11:23.553738   28167 system_pods.go:89] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:11:23.553741   28167 system_pods.go:89] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:11:23.553747   28167 system_pods.go:89] "kube-vip-ha-076508-m03" [61ffbdc1-4caa-450c-8c00-29bca8fccd59] Running
	I0803 23:11:23.553750   28167 system_pods.go:89] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:11:23.553756   28167 system_pods.go:126] duration metric: took 209.840827ms to wait for k8s-apps to be running ...
	I0803 23:11:23.553766   28167 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:11:23.553809   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:11:23.572164   28167 system_svc.go:56] duration metric: took 18.390119ms WaitForService to wait for kubelet
	I0803 23:11:23.572192   28167 kubeadm.go:582] duration metric: took 24.669558424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:11:23.572209   28167 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:11:23.739507   28167 request.go:629] Waited for 167.239058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes
	I0803 23:11:23.739574   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes
	I0803 23:11:23.739579   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.739587   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.739594   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.743056   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:23.743986   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:11:23.744014   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:11:23.744032   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:11:23.744043   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:11:23.744067   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:11:23.744076   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:11:23.744082   28167 node_conditions.go:105] duration metric: took 171.868142ms to run NodePressure ...
	I0803 23:11:23.744097   28167 start.go:241] waiting for startup goroutines ...
	I0803 23:11:23.744125   28167 start.go:255] writing updated cluster config ...
	I0803 23:11:23.744410   28167 ssh_runner.go:195] Run: rm -f paused
	I0803 23:11:23.794827   28167 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0803 23:11:23.796773   28167 out.go:177] * Done! kubectl is now configured to use "ha-076508" cluster and "default" namespace by default
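	The node_ready.go / pod_ready.go polling visible in the log above is nothing more than repeated GETs of the node object every ~500ms until its Ready condition reports True. The following is a minimal standalone client-go sketch of that same loop, not minikube's own code; the kubeconfig path and node name are placeholders for illustration.

	// readiness_poll_sketch.go — illustrative only, assumes a reachable cluster
	// via the kubeconfig path below (placeholder) and standard client-go APIs.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		const nodeName = "ha-076508-m03"
		for {
			// Same request the log shows: GET /api/v1/nodes/<name>.
			n, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("node %q has status \"Ready\":\"True\"\n", nodeName)
					return
				}
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", nodeName)
			time.Sleep(500 * time.Millisecond)
		}
	}

	The "Waited for ... due to client-side throttling" lines come from the same client-go machinery: once the poll fans out to per-pod and per-node GETs, the default client-side rate limiter spaces the requests, which is why consecutive requests in the log sit roughly 200ms apart.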
	
	
	==> CRI-O <==
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.235159617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722726904235137162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd30e1f4-f002-46aa-9a6a-6cac36a943f4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.235966912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70162c40-d7bb-463a-b7e8-ae86a6cc5ff2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.236026714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70162c40-d7bb-463a-b7e8-ae86a6cc5ff2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.236353504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70162c40-d7bb-463a-b7e8-ae86a6cc5ff2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.278371519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25cc0ef0-6519-4366-b623-3be3c976d153 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.278455111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25cc0ef0-6519-4366-b623-3be3c976d153 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.279921914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=342a67fc-0e41-476f-8759-617f6b1ecd54 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.280493183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722726904280465660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=342a67fc-0e41-476f-8759-617f6b1ecd54 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.281120496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fd0c4f5-4346-4609-b641-599f72e108ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.281178854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fd0c4f5-4346-4609-b641-599f72e108ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.281472496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fd0c4f5-4346-4609-b641-599f72e108ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.332363500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f386420-08b9-46a5-bc82-7e0cb0132329 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.332500666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f386420-08b9-46a5-bc82-7e0cb0132329 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.334597419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f016eac1-be24-43e0-8476-5fa6447b9ffe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.335050790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722726904335027258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f016eac1-be24-43e0-8476-5fa6447b9ffe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.335812049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15a2485f-ed9d-44aa-a6a5-ec4e0d7aa061 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.335892485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15a2485f-ed9d-44aa-a6a5-ec4e0d7aa061 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.336145575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15a2485f-ed9d-44aa-a6a5-ec4e0d7aa061 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.382266044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5e450b2-1d08-4af8-80e7-ecc210c5fada name=/runtime.v1.RuntimeService/Version
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.382433940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5e450b2-1d08-4af8-80e7-ecc210c5fada name=/runtime.v1.RuntimeService/Version
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.384387210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f3bfd20-c35d-402f-961c-8d5d2770bf59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.385075724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722726904385033777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f3bfd20-c35d-402f-961c-8d5d2770bf59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.385783808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27eab506-f09b-4afd-8ea9-483588d54800 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.385857452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27eab506-f09b-4afd-8ea9-483588d54800 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:15:04 ha-076508 crio[678]: time="2024-08-03 23:15:04.386372239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27eab506-f09b-4afd-8ea9-483588d54800 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf2cd88f9d490       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5999015810d66       busybox-fc5497c4f-9mswn
	e4d2591ba7d5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   ce24a7aa66e68       coredns-7db6d8ff4d-g4nns
	06304cb4cc30c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   b802406e46b4c       coredns-7db6d8ff4d-jm52b
	6f7c5e8e3bdac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   4047efed84d9c       storage-provisioner
	992a3ac9b52e9       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   f61ecf195fc7f       kindnet-bpdht
	c3100c43f706e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   9f02c76f5b54a       kube-proxy-jvj96
	d05a03627874a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   4cd54ec3ddfec       kube-vip-ha-076508
	1e30a0cbac1a3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   13cab867b36e1       kube-controller-manager-ha-076508
	94ea41effc5da       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   893b2ee90e13f       kube-scheduler-ha-076508
	4ce5fe2a1f3aa       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   b468de169be0b       kube-apiserver-ha-076508
	f127531f146d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   bf23341fb90df       etcd-ha-076508
	
	
	==> coredns [06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff] <==
	[INFO] 10.244.0.4:41384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219271s
	[INFO] 10.244.0.4:40191 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230574s
	[INFO] 10.244.0.4:59881 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008692718s
	[INFO] 10.244.0.4:47621 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107904s
	[INFO] 10.244.0.4:38085 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119908s
	[INFO] 10.244.2.2:54633 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00219105s
	[INFO] 10.244.2.2:54240 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087449s
	[INFO] 10.244.1.2:44472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182942s
	[INFO] 10.244.1.2:54284 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020417s
	[INFO] 10.244.1.2:35720 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113124s
	[INFO] 10.244.1.2:49197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125884s
	[INFO] 10.244.1.2:42019 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010825s
	[INFO] 10.244.1.2:36505 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000274487s
	[INFO] 10.244.0.4:53634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092514s
	[INFO] 10.244.0.4:37869 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148859s
	[INFO] 10.244.0.4:34409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007386s
	[INFO] 10.244.2.2:37127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00023425s
	[INFO] 10.244.1.2:45090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198771s
	[INFO] 10.244.1.2:35116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097607s
	[INFO] 10.244.0.4:54156 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000252361s
	[INFO] 10.244.0.4:56228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118127s
	[INFO] 10.244.2.2:40085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113887s
	[INFO] 10.244.2.2:41147 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160253s
	[INFO] 10.244.1.2:34773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224176s
	[INFO] 10.244.1.2:41590 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094468s
	
	
	==> coredns [e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac] <==
	[INFO] 10.244.2.2:40415 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000128857s
	[INFO] 10.244.2.2:55624 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001648548s
	[INFO] 10.244.1.2:49499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129181s
	[INFO] 10.244.0.4:35373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087731s
	[INFO] 10.244.0.4:34194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127612s
	[INFO] 10.244.2.2:55281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134091s
	[INFO] 10.244.2.2:54805 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174016s
	[INFO] 10.244.2.2:57182 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169629s
	[INFO] 10.244.2.2:60918 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001430378s
	[INFO] 10.244.2.2:56177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124595s
	[INFO] 10.244.2.2:37833 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009163s
	[INFO] 10.244.1.2:59379 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001877956s
	[INFO] 10.244.1.2:55115 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00153877s
	[INFO] 10.244.0.4:60770 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073925s
	[INFO] 10.244.2.2:35733 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000194756s
	[INFO] 10.244.2.2:41572 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113187s
	[INFO] 10.244.2.2:56390 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000128625s
	[INFO] 10.244.1.2:57417 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136164s
	[INFO] 10.244.1.2:45630 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074866s
	[INFO] 10.244.0.4:56762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117476s
	[INFO] 10.244.0.4:47543 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144888s
	[INFO] 10.244.2.2:48453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203019s
	[INFO] 10.244.2.2:47323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155497s
	[INFO] 10.244.1.2:55651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193064s
	[INFO] 10.244.1.2:54565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106172s
	
	
	==> describe nodes <==
	Name:               ha-076508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_07_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:15:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:08:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    ha-076508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f520408175b740ceb19f810f6b0739d9
	  System UUID:                f5204081-75b7-40ce-b19f-810f6b0739d9
	  Boot ID:                    1b5fc419-04f3-4085-a948-6aee54d39a0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9mswn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7db6d8ff4d-g4nns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 coredns-7db6d8ff4d-jm52b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 etcd-ha-076508                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m34s
	  kube-system                 kindnet-bpdht                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m21s
	  kube-system                 kube-apiserver-ha-076508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-controller-manager-ha-076508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-proxy-jvj96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-scheduler-ha-076508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-vip-ha-076508                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m19s  kube-proxy       
	  Normal  Starting                 7m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m34s  kubelet          Node ha-076508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s  kubelet          Node ha-076508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s  kubelet          Node ha-076508 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m22s  node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal  NodeReady                7m3s   kubelet          Node ha-076508 status is now: NodeReady
	  Normal  RegisteredNode           5m5s   node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal  RegisteredNode           3m51s  node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	
	
	Name:               ha-076508-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_09_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:09:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:12:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-076508-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e37b92099f364fcfb7894de373a13dc0
	  System UUID:                e37b9209-9f36-4fcf-b789-4de373a13dc0
	  Boot ID:                    ce951a70-7d26-44f7-b876-80429f6067a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wlr2g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-076508-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m21s
	  kube-system                 kindnet-kw254                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-apiserver-ha-076508-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-ha-076508-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-hkfgl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-ha-076508-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-076508-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m23s (x8 over 5m23s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x8 over 5m23s)  kubelet          Node ha-076508-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x7 over 5m23s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-076508-m02 status is now: NodeNotReady
	
	
	Name:               ha-076508-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_10_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:10:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:15:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    ha-076508-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0c4ebbd959429f966b637eb26caf62
	  System UUID:                ad0c4ebb-d959-429f-966b-637eb26caf62
	  Boot ID:                    48d495ed-b4cd-49d1-87cd-cac9c1cc8ea9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nfwfw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-076508-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-tzzq4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-076508-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-076508-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-7kmfh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-076508-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-076508-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-076508-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal  RegisteredNode           3m51s                node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	
	
	Name:               ha-076508-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_12_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:14:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-076508-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 59e0fe8296564277a8f997ffad0b72b7
	  System UUID:                59e0fe82-9656-4277-a8f9-97ffad0b72b7
	  Boot ID:                    1e39986a-cb9b-4675-9bbc-a7bb913ff696
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hdkw5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-ff944    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-076508-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-076508-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 3 23:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050902] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041428] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.797018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.674662] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug 3 23:07] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.547215] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056174] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.182365] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.110609] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.279600] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.413542] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.061522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.061905] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +1.335796] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.036158] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.075573] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.924842] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.636926] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 3 23:09] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e] <==
	{"level":"warn","ts":"2024-08-03T23:15:04.67839Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.684173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.700486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.708436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.715977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.721027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.724383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.734053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.740725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.747702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.751948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.755551Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.761143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.765108Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.772666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.780508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.787629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.792258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.803628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.810498Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.81749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.852529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.854744Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.861612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:15:04.884974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:15:04 up 8 min,  0 users,  load average: 0.31, 0.29, 0.14
	Linux ha-076508 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18] <==
	I0803 23:14:31.279100       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:14:41.276741       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:14:41.276871       1 main.go:299] handling current node
	I0803 23:14:41.276917       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:14:41.276987       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:14:41.277445       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:14:41.277574       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:14:41.277692       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:14:41.277717       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:14:51.271019       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:14:51.271138       1 main.go:299] handling current node
	I0803 23:14:51.271169       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:14:51.271190       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:14:51.271476       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:14:51.271515       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:14:51.271649       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:14:51.271675       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:15:01.279212       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:15:01.279250       1 main.go:299] handling current node
	I0803 23:15:01.279268       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:15:01.279324       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:15:01.279510       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:15:01.279547       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:15:01.279701       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:15:01.279735       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a] <==
	E0803 23:07:30.639758       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0803 23:07:30.640974       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0803 23:07:30.641029       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0803 23:07:30.642230       1 timeout.go:142] post-timeout activity - time-elapsed: 2.470085ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
	I0803 23:07:30.901565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0803 23:07:30.933757       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0803 23:07:30.947671       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0803 23:07:43.479194       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0803 23:07:43.763026       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0803 23:11:29.000941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58346: use of closed network connection
	E0803 23:11:29.187412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58360: use of closed network connection
	E0803 23:11:29.383828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58380: use of closed network connection
	E0803 23:11:29.600601       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58396: use of closed network connection
	E0803 23:11:29.788511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58416: use of closed network connection
	E0803 23:11:30.003088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58426: use of closed network connection
	E0803 23:11:30.184249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58444: use of closed network connection
	E0803 23:11:30.366155       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58456: use of closed network connection
	E0803 23:11:30.541261       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58472: use of closed network connection
	E0803 23:11:30.860021       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58498: use of closed network connection
	E0803 23:11:31.087099       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58514: use of closed network connection
	E0803 23:11:31.263223       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58534: use of closed network connection
	E0803 23:11:31.445252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58552: use of closed network connection
	E0803 23:11:31.626787       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58566: use of closed network connection
	E0803 23:11:31.816995       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58588: use of closed network connection
	W0803 23:12:59.473673       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.154 192.168.39.86]
	
	
	==> kube-controller-manager [1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8] <==
	I0803 23:10:55.447683       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-076508-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:10:57.878648       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076508-m03"
	I0803 23:11:24.710531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.559026ms"
	I0803 23:11:24.810991       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.231938ms"
	I0803 23:11:24.941532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="130.410072ms"
	I0803 23:11:25.185637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.040499ms"
	I0803 23:11:25.226083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.372653ms"
	I0803 23:11:25.227013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="460.915µs"
	I0803 23:11:25.266351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.953994ms"
	I0803 23:11:25.267357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="172.453µs"
	I0803 23:11:25.363589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.283µs"
	I0803 23:11:27.823545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.623105ms"
	I0803 23:11:27.823752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.914µs"
	I0803 23:11:28.300364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.189063ms"
	I0803 23:11:28.300760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="305.671µs"
	I0803 23:11:28.543884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.853725ms"
	I0803 23:11:28.544416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.528µs"
	E0803 23:12:02.108031       1 certificate_controller.go:146] Sync csr-ccvdl failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-ccvdl": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:12:02.399995       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-076508-m04\" does not exist"
	I0803 23:12:02.461701       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-076508-m04" podCIDRs=["10.244.3.0/24"]
	I0803 23:12:02.908837       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076508-m04"
	I0803 23:12:23.393572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076508-m04"
	I0803 23:13:17.940876       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076508-m04"
	I0803 23:13:17.983262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.178854ms"
	I0803 23:13:17.983744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.39µs"
	
	
	==> kube-proxy [c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa] <==
	I0803 23:07:44.832758       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:07:44.852587       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0803 23:07:44.934096       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:07:44.934142       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:07:44.934159       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:07:44.937787       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:07:44.938109       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:07:44.938153       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:07:44.940042       1 config.go:192] "Starting service config controller"
	I0803 23:07:44.940395       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:07:44.940465       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:07:44.940485       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:07:44.941457       1 config.go:319] "Starting node config controller"
	I0803 23:07:44.942631       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:07:45.041527       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:07:45.041552       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:07:45.043109       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf] <==
	W0803 23:07:28.671970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:07:28.672088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:07:28.780566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:07:28.780614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 23:07:28.783590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:07:28.783671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 23:07:28.807701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0803 23:07:28.807746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0803 23:07:28.893343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:07:28.893449       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:07:29.242730       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:07:29.243403       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:07:32.091551       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0803 23:11:24.672112       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nfwfw\": pod busybox-fc5497c4f-nfwfw is already assigned to node \"ha-076508-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-nfwfw" node="ha-076508-m03"
	E0803 23:11:24.672328       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a132b6e0-614a-4aaa-b1f6-b11bdf6a0fc0(default/busybox-fc5497c4f-nfwfw) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-nfwfw"
	E0803 23:11:24.672373       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nfwfw\": pod busybox-fc5497c4f-nfwfw is already assigned to node \"ha-076508-m03\"" pod="default/busybox-fc5497c4f-nfwfw"
	I0803 23:11:24.672440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-nfwfw" node="ha-076508-m03"
	E0803 23:11:24.708174       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wlr2g\": pod busybox-fc5497c4f-wlr2g is already assigned to node \"ha-076508-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wlr2g" node="ha-076508-m02"
	E0803 23:11:24.717435       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5cc9bc14-7454-4e5b-9dfc-c7702f42323b(default/busybox-fc5497c4f-wlr2g) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wlr2g"
	E0803 23:11:24.725004       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wlr2g\": pod busybox-fc5497c4f-wlr2g is already assigned to node \"ha-076508-m02\"" pod="default/busybox-fc5497c4f-wlr2g"
	I0803 23:11:24.725485       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wlr2g" node="ha-076508-m02"
	E0803 23:12:02.482595       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ksmxp\": pod kindnet-ksmxp is already assigned to node \"ha-076508-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ksmxp" node="ha-076508-m04"
	E0803 23:12:02.482703       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 415811c8-0f4b-44c3-954e-8e56747d8462(kube-system/kindnet-ksmxp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ksmxp"
	E0803 23:12:02.482727       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ksmxp\": pod kindnet-ksmxp is already assigned to node \"ha-076508-m04\"" pod="kube-system/kindnet-ksmxp"
	I0803 23:12:02.482788       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ksmxp" node="ha-076508-m04"
	
	
	==> kubelet <==
	Aug 03 23:10:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:10:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:11:24 ha-076508 kubelet[1368]: I0803 23:11:24.712478    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jm52b" podStartSLOduration=221.71242241 podStartE2EDuration="3m41.71242241s" podCreationTimestamp="2024-08-03 23:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-03 23:08:03.054576597 +0000 UTC m=+32.387083358" watchObservedRunningTime="2024-08-03 23:11:24.71242241 +0000 UTC m=+234.044929145"
	Aug 03 23:11:24 ha-076508 kubelet[1368]: I0803 23:11:24.713848    1368 topology_manager.go:215] "Topology Admit Handler" podUID="bb1d5016-7a80-440d-8d04-9c51a1c84199" podNamespace="default" podName="busybox-fc5497c4f-9mswn"
	Aug 03 23:11:24 ha-076508 kubelet[1368]: I0803 23:11:24.798895    1368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhwr\" (UniqueName: \"kubernetes.io/projected/bb1d5016-7a80-440d-8d04-9c51a1c84199-kube-api-access-pdhwr\") pod \"busybox-fc5497c4f-9mswn\" (UID: \"bb1d5016-7a80-440d-8d04-9c51a1c84199\") " pod="default/busybox-fc5497c4f-9mswn"
	Aug 03 23:11:30 ha-076508 kubelet[1368]: E0803 23:11:30.852522    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:11:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:11:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:11:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:11:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:12:30 ha-076508 kubelet[1368]: E0803 23:12:30.836936    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:12:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:12:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:12:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:12:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:13:30 ha-076508 kubelet[1368]: E0803 23:13:30.841062    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:13:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:13:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:13:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:13:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:14:30 ha-076508 kubelet[1368]: E0803 23:14:30.838655    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:14:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:14:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:14:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:14:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076508 -n ha-076508
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.12s)
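The post-mortem checks above can be reproduced by hand against the same profile. A minimal sketch, assuming the ha-076508 profile and its kubeconfig context still exist on the test host; the describe-nodes call is an assumption about how the node dump above was gathered, while the other two commands appear verbatim in the helpers output:

	# dump conditions, taints and events for every node in the HA cluster
	kubectl --context ha-076508 describe nodes
	# check the apiserver state minikube reports for the primary control plane
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076508 -n ha-076508
	# list any pods that are not in the Running phase, across all namespaces
	kubectl --context ha-076508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running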

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (59.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (3.199880172s)

-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0803 23:15:09.419215   33308 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:15:09.419475   33308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:09.419484   33308 out.go:304] Setting ErrFile to fd 2...
	I0803 23:15:09.419489   33308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:09.419678   33308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:15:09.419846   33308 out.go:298] Setting JSON to false
	I0803 23:15:09.419872   33308 mustload.go:65] Loading cluster: ha-076508
	I0803 23:15:09.419910   33308 notify.go:220] Checking for updates...
	I0803 23:15:09.420397   33308 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:15:09.420417   33308 status.go:255] checking status of ha-076508 ...
	I0803 23:15:09.420833   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:09.420888   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:09.441184   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34243
	I0803 23:15:09.441833   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:09.442432   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:09.442449   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:09.442866   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:09.443057   33308 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:15:09.444787   33308 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:15:09.444808   33308 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:09.445097   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:09.445135   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:09.462304   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0803 23:15:09.462685   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:09.463207   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:09.463242   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:09.463595   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:09.463772   33308 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:15:09.466674   33308 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:09.467064   33308 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:09.467097   33308 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:09.467230   33308 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:09.467603   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:09.467649   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:09.482429   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0803 23:15:09.482877   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:09.483309   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:09.483327   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:09.483662   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:09.483869   33308 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:15:09.484040   33308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:09.484076   33308 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:15:09.487150   33308 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:09.487539   33308 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:09.487564   33308 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:09.487728   33308 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:15:09.487931   33308 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:15:09.488116   33308 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:15:09.488269   33308 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:15:09.573133   33308 ssh_runner.go:195] Run: systemctl --version
	I0803 23:15:09.579584   33308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:09.597017   33308 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:09.597046   33308 api_server.go:166] Checking apiserver status ...
	I0803 23:15:09.597079   33308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:09.611223   33308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:15:09.620697   33308 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:09.620764   33308 ssh_runner.go:195] Run: ls
	I0803 23:15:09.625388   33308 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:09.632814   33308 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:09.632839   33308 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:15:09.632865   33308 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:09.632890   33308 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:15:09.633191   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:09.633232   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:09.648030   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0803 23:15:09.648470   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:09.648936   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:09.648960   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:09.649294   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:09.649506   33308 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:15:09.651153   33308 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:15:09.651179   33308 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:09.651529   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:09.651576   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:09.666234   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I0803 23:15:09.666650   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:09.667170   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:09.667189   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:09.667487   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:09.667673   33308 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:15:09.670405   33308 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:09.670842   33308 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:09.670862   33308 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:09.671078   33308 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:09.671359   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:09.671392   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:09.686478   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0803 23:15:09.686839   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:09.687282   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:09.687306   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:09.687583   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:09.687786   33308 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:15:09.687991   33308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:09.688009   33308 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:15:09.690879   33308 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:09.691306   33308 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:09.691334   33308 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:09.691586   33308 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:15:09.691750   33308 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:15:09.691927   33308 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:15:09.692127   33308 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	W0803 23:15:12.225619   33308 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:12.225728   33308 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0803 23:15:12.225742   33308 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:12.225749   33308 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:15:12.225765   33308 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:12.225773   33308 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:12.226065   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:12.226111   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:12.240536   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0803 23:15:12.241048   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:12.241741   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:12.241773   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:12.242153   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:12.242338   33308 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:12.244297   33308 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:12.244312   33308 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:12.244770   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:12.244853   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:12.260159   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33999
	I0803 23:15:12.260575   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:12.261065   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:12.261096   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:12.261497   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:12.261684   33308 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:12.264783   33308 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:12.265221   33308 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:12.265258   33308 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:12.265349   33308 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:12.265669   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:12.265702   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:12.280520   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0803 23:15:12.281126   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:12.281717   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:12.281747   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:12.282175   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:12.282371   33308 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:12.282588   33308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:12.282612   33308 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:12.286018   33308 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:12.286493   33308 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:12.286515   33308 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:12.286839   33308 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:12.287059   33308 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:12.287273   33308 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:12.287439   33308 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:12.364889   33308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:12.381333   33308 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:12.381376   33308 api_server.go:166] Checking apiserver status ...
	I0803 23:15:12.381428   33308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:12.395357   33308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:12.405667   33308 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:12.405714   33308 ssh_runner.go:195] Run: ls
	I0803 23:15:12.410727   33308 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:12.414945   33308 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:12.414968   33308 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:12.414978   33308 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:12.414995   33308 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:12.415326   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:12.415360   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:12.429860   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42259
	I0803 23:15:12.430252   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:12.430759   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:12.430783   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:12.431149   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:12.431382   33308 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:12.432852   33308 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:12.432870   33308 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:12.433313   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:12.433380   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:12.448593   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0803 23:15:12.449072   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:12.449663   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:12.449691   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:12.450089   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:12.450331   33308 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:12.453111   33308 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:12.453560   33308 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:12.453585   33308 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:12.453749   33308 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:12.454042   33308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:12.454092   33308 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:12.469170   33308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0803 23:15:12.469572   33308 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:12.470045   33308 main.go:141] libmachine: Using API Version  1
	I0803 23:15:12.470065   33308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:12.470402   33308 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:12.470569   33308 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:12.470723   33308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:12.470744   33308 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:12.473308   33308 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:12.473760   33308 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:12.473799   33308 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:12.474051   33308 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:12.474238   33308 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:12.474402   33308 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:12.474524   33308 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:12.561174   33308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:12.578611   33308 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (5.505316101s)

-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0803 23:15:13.262835   33408 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:15:13.262985   33408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:13.262997   33408 out.go:304] Setting ErrFile to fd 2...
	I0803 23:15:13.263004   33408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:13.263256   33408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:15:13.263481   33408 out.go:298] Setting JSON to false
	I0803 23:15:13.263511   33408 mustload.go:65] Loading cluster: ha-076508
	I0803 23:15:13.263837   33408 notify.go:220] Checking for updates...
	I0803 23:15:13.264753   33408 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:15:13.264792   33408 status.go:255] checking status of ha-076508 ...
	I0803 23:15:13.265932   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:13.266009   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:13.281366   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0803 23:15:13.281844   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:13.282399   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:13.282423   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:13.282846   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:13.283048   33408 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:15:13.284726   33408 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:15:13.284748   33408 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:13.285037   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:13.285071   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:13.301935   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0803 23:15:13.302300   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:13.302898   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:13.302927   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:13.303348   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:13.303562   33408 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:15:13.306264   33408 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:13.306784   33408 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:13.306820   33408 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:13.306882   33408 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:13.307168   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:13.307230   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:13.321804   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0803 23:15:13.322166   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:13.322601   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:13.322620   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:13.322897   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:13.323092   33408 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:15:13.323308   33408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:13.323330   33408 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:15:13.326110   33408 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:13.326520   33408 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:13.326543   33408 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:13.326691   33408 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:15:13.326895   33408 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:15:13.327035   33408 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:15:13.327197   33408 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:15:13.413905   33408 ssh_runner.go:195] Run: systemctl --version
	I0803 23:15:13.420809   33408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:13.438614   33408 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:13.438638   33408 api_server.go:166] Checking apiserver status ...
	I0803 23:15:13.438679   33408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:13.456281   33408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:15:13.469688   33408 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:13.469774   33408 ssh_runner.go:195] Run: ls
	I0803 23:15:13.474736   33408 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:13.482205   33408 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:13.482235   33408 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:15:13.482246   33408 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:13.482263   33408 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:15:13.482554   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:13.482588   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:13.498276   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0803 23:15:13.498680   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:13.499195   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:13.499215   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:13.499567   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:13.499764   33408 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:15:13.501235   33408 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:15:13.501252   33408 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:13.501585   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:13.501619   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:13.516672   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0803 23:15:13.517072   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:13.517602   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:13.517622   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:13.517929   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:13.518114   33408 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:15:13.520745   33408 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:13.521287   33408 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:13.521310   33408 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:13.521460   33408 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:13.521764   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:13.521806   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:13.536686   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36135
	I0803 23:15:13.537088   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:13.537585   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:13.537609   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:13.537911   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:13.538126   33408 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:15:13.538408   33408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:13.538433   33408 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:15:13.540986   33408 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:13.541486   33408 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:13.541519   33408 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:13.541707   33408 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:15:13.541896   33408 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:15:13.542053   33408 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:15:13.542181   33408 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	W0803 23:15:15.297686   33408 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:15.297739   33408 retry.go:31] will retry after 224.900708ms: dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:18.369672   33408 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:18.369744   33408 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0803 23:15:18.369761   33408 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:18.369771   33408 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:15:18.369810   33408 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:18.369817   33408 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:18.370143   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:18.370185   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:18.385438   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36201
	I0803 23:15:18.385892   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:18.386419   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:18.386443   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:18.386755   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:18.386933   33408 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:18.388390   33408 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:18.388408   33408 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:18.388711   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:18.388750   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:18.403506   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0803 23:15:18.403947   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:18.404451   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:18.404471   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:18.404849   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:18.405083   33408 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:18.408197   33408 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:18.408566   33408 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:18.408600   33408 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:18.408761   33408 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:18.409069   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:18.409103   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:18.424318   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33105
	I0803 23:15:18.424743   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:18.425209   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:18.425230   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:18.425564   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:18.425770   33408 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:18.425965   33408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:18.425986   33408 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:18.428979   33408 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:18.429428   33408 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:18.429454   33408 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:18.429648   33408 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:18.429825   33408 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:18.429951   33408 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:18.430091   33408 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:18.509058   33408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:18.525112   33408 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:18.525141   33408 api_server.go:166] Checking apiserver status ...
	I0803 23:15:18.525189   33408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:18.539871   33408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:18.550168   33408 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:18.550219   33408 ssh_runner.go:195] Run: ls
	I0803 23:15:18.554791   33408 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:18.561054   33408 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:18.561080   33408 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:18.561089   33408 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:18.561104   33408 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:18.561496   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:18.561535   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:18.577222   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I0803 23:15:18.577705   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:18.578177   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:18.578204   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:18.578471   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:18.578697   33408 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:18.580498   33408 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:18.580513   33408 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:18.580893   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:18.580930   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:18.597270   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I0803 23:15:18.597759   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:18.598193   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:18.598214   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:18.598547   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:18.598719   33408 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:18.601602   33408 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:18.602054   33408 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:18.602078   33408 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:18.602395   33408 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:18.602667   33408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:18.602702   33408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:18.618159   33408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
	I0803 23:15:18.618548   33408 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:18.619055   33408 main.go:141] libmachine: Using API Version  1
	I0803 23:15:18.619091   33408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:18.619420   33408 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:18.619612   33408 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:18.619770   33408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:18.619799   33408 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:18.622643   33408 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:18.623109   33408 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:18.623132   33408 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:18.623281   33408 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:18.623425   33408 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:18.623575   33408 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:18.623699   33408 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:18.709149   33408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:18.723371   33408 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (4.775231624s)

-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0803 23:15:20.133350   33509 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:15:20.133507   33509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:20.133516   33509 out.go:304] Setting ErrFile to fd 2...
	I0803 23:15:20.133520   33509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:20.133704   33509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:15:20.133871   33509 out.go:298] Setting JSON to false
	I0803 23:15:20.133893   33509 mustload.go:65] Loading cluster: ha-076508
	I0803 23:15:20.133924   33509 notify.go:220] Checking for updates...
	I0803 23:15:20.134235   33509 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:15:20.134247   33509 status.go:255] checking status of ha-076508 ...
	I0803 23:15:20.134590   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:20.134646   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:20.155744   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46061
	I0803 23:15:20.156214   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:20.156777   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:20.156801   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:20.157189   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:20.157408   33509 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:15:20.159041   33509 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:15:20.159064   33509 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:20.159427   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:20.159466   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:20.175406   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
	I0803 23:15:20.175781   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:20.176259   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:20.176286   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:20.176648   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:20.176860   33509 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:15:20.179843   33509 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:20.180315   33509 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:20.180354   33509 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:20.180508   33509 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:20.180830   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:20.180869   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:20.195554   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0803 23:15:20.195930   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:20.196401   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:20.196422   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:20.196728   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:20.196936   33509 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:15:20.197093   33509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:20.197122   33509 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:15:20.199932   33509 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:20.200378   33509 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:20.200410   33509 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:20.200574   33509 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:15:20.200736   33509 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:15:20.200902   33509 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:15:20.201030   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:15:20.281116   33509 ssh_runner.go:195] Run: systemctl --version
	I0803 23:15:20.288751   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:20.306224   33509 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:20.306254   33509 api_server.go:166] Checking apiserver status ...
	I0803 23:15:20.306285   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:20.321033   33509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:15:20.330552   33509 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:20.330612   33509 ssh_runner.go:195] Run: ls
	I0803 23:15:20.339124   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:20.345463   33509 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:20.345495   33509 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:15:20.345510   33509 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:20.345532   33509 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:15:20.345955   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:20.346008   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:20.360686   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I0803 23:15:20.361134   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:20.361674   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:20.361696   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:20.361981   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:20.362166   33509 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:15:20.363825   33509 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:15:20.363845   33509 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:20.364192   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:20.364227   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:20.378898   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0803 23:15:20.379333   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:20.379843   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:20.379874   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:20.380172   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:20.380343   33509 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:15:20.383629   33509 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:20.384171   33509 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:20.384201   33509 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:20.384362   33509 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:20.384767   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:20.384817   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:20.399202   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0803 23:15:20.399660   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:20.400194   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:20.400214   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:20.400499   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:20.400697   33509 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:15:20.400892   33509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:20.400908   33509 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:15:20.404246   33509 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:20.404721   33509 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:20.404741   33509 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:20.404949   33509 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:15:20.405117   33509 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:15:20.405275   33509 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:15:20.405395   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	W0803 23:15:21.441695   33509 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:21.441745   33509 retry.go:31] will retry after 317.374189ms: dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:24.513669   33509 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:24.513743   33509 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0803 23:15:24.513756   33509 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:24.513765   33509 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:15:24.513792   33509 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:24.513801   33509 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:24.514109   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:24.514148   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:24.530268   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0803 23:15:24.530766   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:24.531271   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:24.531304   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:24.531597   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:24.531780   33509 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:24.533522   33509 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:24.533538   33509 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:24.533944   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:24.533986   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:24.548317   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0803 23:15:24.548671   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:24.549176   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:24.549196   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:24.549544   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:24.549734   33509 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:24.552366   33509 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:24.552815   33509 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:24.552845   33509 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:24.553050   33509 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:24.553332   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:24.553390   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:24.567598   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0803 23:15:24.568053   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:24.568551   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:24.568575   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:24.568832   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:24.568998   33509 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:24.569192   33509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:24.569209   33509 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:24.572256   33509 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:24.572706   33509 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:24.572745   33509 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:24.572914   33509 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:24.573061   33509 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:24.573189   33509 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:24.573302   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:24.650265   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:24.666876   33509 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:24.666904   33509 api_server.go:166] Checking apiserver status ...
	I0803 23:15:24.666949   33509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:24.685109   33509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:24.696938   33509 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:24.696985   33509 ssh_runner.go:195] Run: ls
	I0803 23:15:24.701815   33509 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:24.706295   33509 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:24.706316   33509 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:24.706327   33509 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:24.706354   33509 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:24.706637   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:24.706677   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:24.721825   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0803 23:15:24.722332   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:24.722812   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:24.722834   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:24.723127   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:24.723292   33509 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:24.724791   33509 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:24.724807   33509 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:24.725090   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:24.725127   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:24.739236   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0803 23:15:24.739578   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:24.740016   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:24.740035   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:24.740334   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:24.740489   33509 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:24.742961   33509 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:24.743338   33509 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:24.743377   33509 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:24.743477   33509 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:24.743753   33509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:24.743782   33509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:24.758142   33509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0803 23:15:24.758487   33509 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:24.758963   33509 main.go:141] libmachine: Using API Version  1
	I0803 23:15:24.758985   33509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:24.759303   33509 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:24.759526   33509 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:24.759738   33509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:24.759767   33509 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:24.762522   33509 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:24.763039   33509 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:24.763063   33509 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:24.763241   33509 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:24.763423   33509 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:24.763599   33509 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:24.763731   33509 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:24.853241   33509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:24.868725   33509 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
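The stderr above shows why ha-076508-m02 is reported as `host: Error` / `kubelet: Nonexistent`: the status probe cannot open an SSH session to 192.168.39.245:22 ("no route to host"), retries briefly, and then gives up. As a rough illustration only (this is a hypothetical sketch, not minikube's actual sshutil/status code; `probeSSH`, the attempt count, and the wait interval are invented for the example), a reachability check of that shape looks like:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH is a hypothetical reachability check mirroring the pattern in the
// log above: retry a TCP dial to host:22 a few times before declaring the
// node unreachable ("dial tcp ...:22: connect: no route to host").
func probeSSH(ip string, attempts int, wait time.Duration) error {
	addr := net.JoinHostPort(ip, "22")
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil // reachable; a real status check would now run commands over SSH
		}
		time.Sleep(wait) // comparable to the "will retry after ..." lines in the log
	}
	return fmt.Errorf("host unreachable after %d attempts: %w", attempts, err)
}

func main() {
	// 192.168.39.245 is the m02 address from the log; when the VM is down,
	// this fails and the node would be reported as Host:Error.
	if err := probeSSH("192.168.39.245", 3, 300*time.Millisecond); err != nil {
		fmt.Println("status would report Host:Error:", err)
	}
}
```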
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (3.735876062s)

                                                
                                                
-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:15:28.188684   33624 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:15:28.188962   33624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:28.188971   33624 out.go:304] Setting ErrFile to fd 2...
	I0803 23:15:28.188975   33624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:28.189135   33624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:15:28.189287   33624 out.go:298] Setting JSON to false
	I0803 23:15:28.189308   33624 mustload.go:65] Loading cluster: ha-076508
	I0803 23:15:28.189396   33624 notify.go:220] Checking for updates...
	I0803 23:15:28.189813   33624 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:15:28.189837   33624 status.go:255] checking status of ha-076508 ...
	I0803 23:15:28.190270   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:28.190340   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:28.205627   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43117
	I0803 23:15:28.206007   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:28.206605   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:28.206625   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:28.207017   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:28.207230   33624 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:15:28.208719   33624 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:15:28.208743   33624 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:28.209135   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:28.209187   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:28.224422   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I0803 23:15:28.224822   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:28.225254   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:28.225276   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:28.225658   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:28.225861   33624 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:15:28.228599   33624 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:28.229040   33624 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:28.229065   33624 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:28.229208   33624 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:28.229612   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:28.229657   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:28.244001   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0803 23:15:28.244436   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:28.244905   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:28.244924   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:28.245189   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:28.245342   33624 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:15:28.245589   33624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:28.245611   33624 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:15:28.248460   33624 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:28.248878   33624 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:28.248896   33624 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:28.249072   33624 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:15:28.249270   33624 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:15:28.249429   33624 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:15:28.249605   33624 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:15:28.333377   33624 ssh_runner.go:195] Run: systemctl --version
	I0803 23:15:28.339495   33624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:28.355562   33624 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:28.355604   33624 api_server.go:166] Checking apiserver status ...
	I0803 23:15:28.355648   33624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:28.373179   33624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:15:28.384753   33624 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:28.384815   33624 ssh_runner.go:195] Run: ls
	I0803 23:15:28.390110   33624 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:28.396567   33624 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:28.396600   33624 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:15:28.396614   33624 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:28.396635   33624 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:15:28.397125   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:28.397164   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:28.413137   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39187
	I0803 23:15:28.413566   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:28.414051   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:28.414076   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:28.414412   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:28.414600   33624 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:15:28.416154   33624 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:15:28.416170   33624 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:28.416436   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:28.416480   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:28.434282   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0803 23:15:28.434710   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:28.435195   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:28.435218   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:28.435556   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:28.435737   33624 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:15:28.438830   33624 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:28.439288   33624 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:28.439312   33624 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:28.439443   33624 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:28.439832   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:28.439873   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:28.454694   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I0803 23:15:28.455091   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:28.455619   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:28.455637   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:28.455930   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:28.456107   33624 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:15:28.456266   33624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:28.456285   33624 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:15:28.459094   33624 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:28.459502   33624 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:28.459543   33624 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:28.459730   33624 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:15:28.459913   33624 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:15:28.460099   33624 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:15:28.460280   33624 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	W0803 23:15:31.525591   33624 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:31.525681   33624 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0803 23:15:31.525704   33624 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:31.525714   33624 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:15:31.525738   33624 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:31.525748   33624 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:31.526079   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:31.526117   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:31.540568   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0803 23:15:31.541023   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:31.541560   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:31.541585   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:31.541869   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:31.542031   33624 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:31.543581   33624 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:31.543598   33624 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:31.543916   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:31.543971   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:31.560481   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0803 23:15:31.560954   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:31.561428   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:31.561447   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:31.561769   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:31.561957   33624 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:31.564863   33624 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:31.565278   33624 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:31.565302   33624 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:31.565482   33624 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:31.565905   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:31.565946   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:31.580350   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45709
	I0803 23:15:31.580782   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:31.581277   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:31.581298   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:31.581660   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:31.581847   33624 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:31.582022   33624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:31.582040   33624 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:31.585097   33624 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:31.585589   33624 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:31.585622   33624 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:31.585770   33624 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:31.585918   33624 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:31.586098   33624 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:31.586242   33624 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:31.665320   33624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:31.682451   33624 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:31.682482   33624 api_server.go:166] Checking apiserver status ...
	I0803 23:15:31.682522   33624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:31.702622   33624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:31.713116   33624 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:31.713163   33624 ssh_runner.go:195] Run: ls
	I0803 23:15:31.717542   33624 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:31.721959   33624 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:31.721984   33624 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:31.721995   33624 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:31.722012   33624 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:31.722311   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:31.722341   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:31.736880   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0803 23:15:31.737381   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:31.737888   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:31.737919   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:31.738243   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:31.738413   33624 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:31.739983   33624 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:31.740000   33624 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:31.740268   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:31.740303   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:31.754715   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0803 23:15:31.755128   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:31.755545   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:31.755567   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:31.755876   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:31.756049   33624 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:31.758722   33624 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:31.759164   33624 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:31.759189   33624 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:31.759336   33624 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:31.759621   33624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:31.759669   33624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:31.776414   33624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0803 23:15:31.776816   33624 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:31.777401   33624 main.go:141] libmachine: Using API Version  1
	I0803 23:15:31.777422   33624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:31.777693   33624 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:31.777879   33624 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:31.778089   33624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:31.778119   33624 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:31.781112   33624 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:31.781524   33624 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:31.781549   33624 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:31.781679   33624 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:31.781820   33624 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:31.781967   33624 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:31.782083   33624 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:31.865083   33624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:31.881185   33624 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (4.380634207s)

                                                
                                                
-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:15:33.934797   33724 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:15:33.935051   33724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:33.935060   33724 out.go:304] Setting ErrFile to fd 2...
	I0803 23:15:33.935065   33724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:33.935247   33724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:15:33.935401   33724 out.go:298] Setting JSON to false
	I0803 23:15:33.935422   33724 mustload.go:65] Loading cluster: ha-076508
	I0803 23:15:33.935540   33724 notify.go:220] Checking for updates...
	I0803 23:15:33.935798   33724 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:15:33.935813   33724 status.go:255] checking status of ha-076508 ...
	I0803 23:15:33.936310   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:33.936375   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:33.954121   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0803 23:15:33.954608   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:33.955218   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:33.955239   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:33.955645   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:33.955896   33724 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:15:33.957643   33724 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:15:33.957657   33724 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:33.957961   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:33.957996   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:33.972380   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I0803 23:15:33.972825   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:33.973290   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:33.973311   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:33.973629   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:33.973817   33724 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:15:33.976311   33724 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:33.976676   33724 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:33.976704   33724 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:33.976811   33724 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:33.977139   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:33.977182   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:33.991932   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I0803 23:15:33.992344   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:33.992841   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:33.992864   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:33.993302   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:33.993535   33724 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:15:33.993735   33724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:33.993761   33724 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:15:33.996322   33724 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:33.996769   33724 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:33.996788   33724 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:33.996996   33724 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:15:33.997153   33724 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:15:33.997283   33724 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:15:33.997380   33724 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:15:34.081128   33724 ssh_runner.go:195] Run: systemctl --version
	I0803 23:15:34.087555   33724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:34.102764   33724 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:34.102791   33724 api_server.go:166] Checking apiserver status ...
	I0803 23:15:34.102821   33724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:34.116938   33724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:15:34.126518   33724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:34.126579   33724 ssh_runner.go:195] Run: ls
	I0803 23:15:34.131156   33724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:34.135895   33724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:34.135918   33724 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:15:34.135928   33724 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:34.135942   33724 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:15:34.136290   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:34.136327   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:34.151767   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0803 23:15:34.152234   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:34.152696   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:34.152719   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:34.153018   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:34.153205   33724 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:15:34.154652   33724 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:15:34.154665   33724 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:34.154957   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:34.155020   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:34.170091   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0803 23:15:34.170491   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:34.170889   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:34.170912   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:34.171173   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:34.171351   33724 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:15:34.174342   33724 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:34.174741   33724 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:34.174772   33724 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:34.174887   33724 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:34.175187   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:34.175219   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:34.189461   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0803 23:15:34.189876   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:34.190329   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:34.190346   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:34.190653   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:34.190830   33724 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:15:34.191018   33724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:34.191037   33724 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:15:34.193756   33724 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:34.194201   33724 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:34.194229   33724 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:34.194370   33724 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:15:34.194563   33724 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:15:34.194710   33724 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:15:34.194861   33724 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	W0803 23:15:34.593592   33724 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:34.593650   33724 retry.go:31] will retry after 264.428221ms: dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:37.921599   33724 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:37.921705   33724 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0803 23:15:37.921725   33724 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:37.921733   33724 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:15:37.921750   33724 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:37.921758   33724 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:37.922145   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:37.922187   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:37.936693   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I0803 23:15:37.937174   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:37.937695   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:37.937718   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:37.937989   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:37.938151   33724 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:37.939514   33724 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:37.939532   33724 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:37.939815   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:37.939845   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:37.954201   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I0803 23:15:37.954651   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:37.955117   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:37.955138   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:37.955430   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:37.955597   33724 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:37.957901   33724 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:37.958282   33724 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:37.958314   33724 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:37.958425   33724 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:37.958708   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:37.958742   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:37.973575   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0803 23:15:37.974009   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:37.974500   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:37.974521   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:37.974796   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:37.975005   33724 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:37.975198   33724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:37.975221   33724 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:37.978169   33724 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:37.978608   33724 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:37.978633   33724 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:37.978805   33724 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:37.978963   33724 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:37.979123   33724 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:37.979251   33724 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:38.058331   33724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:38.074232   33724 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:38.074260   33724 api_server.go:166] Checking apiserver status ...
	I0803 23:15:38.074300   33724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:38.088519   33724 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:38.098520   33724 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:38.098588   33724 ssh_runner.go:195] Run: ls
	I0803 23:15:38.103287   33724 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:38.107854   33724 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:38.107879   33724 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:38.107887   33724 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:38.107900   33724 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:38.108189   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:38.108220   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:38.122900   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0803 23:15:38.123335   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:38.123826   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:38.123850   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:38.124204   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:38.124396   33724 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:38.126001   33724 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:38.126034   33724 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:38.126394   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:38.126452   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:38.140926   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0803 23:15:38.141308   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:38.141803   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:38.141827   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:38.142123   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:38.142323   33724 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:38.145258   33724 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:38.145708   33724 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:38.145753   33724 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:38.145877   33724 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:38.146261   33724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:38.146299   33724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:38.160983   33724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0803 23:15:38.161435   33724 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:38.161938   33724 main.go:141] libmachine: Using API Version  1
	I0803 23:15:38.161966   33724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:38.162276   33724 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:38.162482   33724 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:38.162690   33724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:38.162710   33724 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:38.165874   33724 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:38.166274   33724 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:38.166298   33724 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:38.166414   33724 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:38.166577   33724 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:38.166722   33724 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:38.166834   33724 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:38.256859   33724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:38.272411   33724 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
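The failing `status` run above repeatedly hits "dial tcp 192.168.39.245:22: connect: no route to host" and backs off before retrying (sshutil.go / retry.go lines). The following is a minimal, self-contained Go sketch of that dial-with-retry pattern, not minikube's actual sshutil code; the address, attempt count, and delay are illustrative only.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry is a hypothetical helper: try the node's SSH port a few
// times, backing off briefly between attempts, as the log above does.
func dialWithRetry(addr string, attempts int, delay time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if conn, err := dialWithRetry("192.168.39.245:22", 3, 300*time.Millisecond); err != nil {
		// This is the situation in which the node is reported as Host:Error above.
		fmt.Println("SSH unreachable:", err)
	} else {
		conn.Close()
		fmt.Println("SSH port reachable")
	}
}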
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (3.743403806s)

                                                
                                                
-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:15:44.314888   33840 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:15:44.315021   33840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:44.315030   33840 out.go:304] Setting ErrFile to fd 2...
	I0803 23:15:44.315035   33840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:44.315212   33840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:15:44.315403   33840 out.go:298] Setting JSON to false
	I0803 23:15:44.315435   33840 mustload.go:65] Loading cluster: ha-076508
	I0803 23:15:44.315528   33840 notify.go:220] Checking for updates...
	I0803 23:15:44.315891   33840 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:15:44.315908   33840 status.go:255] checking status of ha-076508 ...
	I0803 23:15:44.316262   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:44.316322   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:44.335772   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0803 23:15:44.336232   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:44.336850   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:44.336883   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:44.337174   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:44.337309   33840 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:15:44.338869   33840 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:15:44.338892   33840 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:44.339214   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:44.339254   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:44.353414   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0803 23:15:44.353918   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:44.354391   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:44.354418   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:44.354684   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:44.354909   33840 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:15:44.357532   33840 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:44.357944   33840 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:44.357983   33840 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:44.358103   33840 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:44.358428   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:44.358471   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:44.372731   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0803 23:15:44.373212   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:44.373697   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:44.373724   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:44.374090   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:44.374262   33840 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:15:44.374469   33840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:44.374505   33840 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:15:44.377469   33840 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:44.377896   33840 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:44.377919   33840 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:44.378075   33840 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:15:44.378234   33840 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:15:44.378380   33840 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:15:44.378483   33840 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:15:44.461329   33840 ssh_runner.go:195] Run: systemctl --version
	I0803 23:15:44.467746   33840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:44.483741   33840 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:44.483771   33840 api_server.go:166] Checking apiserver status ...
	I0803 23:15:44.483820   33840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:44.501425   33840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:15:44.512516   33840 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:44.512578   33840 ssh_runner.go:195] Run: ls
	I0803 23:15:44.517621   33840 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:44.522011   33840 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:44.522032   33840 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:15:44.522042   33840 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:44.522057   33840 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:15:44.522348   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:44.522379   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:44.537263   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0803 23:15:44.537682   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:44.538159   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:44.538180   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:44.538511   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:44.538705   33840 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:15:44.540654   33840 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:15:44.540673   33840 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:44.540961   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:44.540994   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:44.555929   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I0803 23:15:44.556392   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:44.557487   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:44.557511   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:44.557895   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:44.558108   33840 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:15:44.560928   33840 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:44.561382   33840 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:44.561405   33840 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:44.561585   33840 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:15:44.561903   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:44.561957   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:44.577281   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
	I0803 23:15:44.577728   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:44.578179   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:44.578198   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:44.578506   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:44.578723   33840 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:15:44.578945   33840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:44.578968   33840 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:15:44.581750   33840 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:44.582349   33840 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:15:44.582372   33840 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:15:44.582603   33840 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:15:44.582757   33840 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:15:44.582884   33840 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:15:44.582981   33840 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	W0803 23:15:47.649579   33840 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0803 23:15:47.649701   33840 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0803 23:15:47.649726   33840 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:47.649736   33840 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:15:47.649756   33840 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0803 23:15:47.649766   33840 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:47.650233   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:47.650289   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:47.665744   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45283
	I0803 23:15:47.666179   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:47.666662   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:47.666686   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:47.666956   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:47.667143   33840 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:47.668717   33840 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:47.668736   33840 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:47.669026   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:47.669062   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:47.684906   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0803 23:15:47.685398   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:47.685888   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:47.685912   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:47.686238   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:47.686471   33840 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:47.689730   33840 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:47.690177   33840 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:47.690205   33840 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:47.690310   33840 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:47.690637   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:47.690675   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:47.706127   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0803 23:15:47.706609   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:47.707113   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:47.707140   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:47.707424   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:47.707588   33840 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:47.707784   33840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:47.707816   33840 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:47.710562   33840 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:47.710923   33840 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:47.710957   33840 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:47.711084   33840 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:47.711294   33840 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:47.711453   33840 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:47.711590   33840 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:47.794686   33840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:47.816220   33840 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:47.816244   33840 api_server.go:166] Checking apiserver status ...
	I0803 23:15:47.816272   33840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:47.832723   33840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:47.844977   33840 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:47.845024   33840 ssh_runner.go:195] Run: ls
	I0803 23:15:47.850084   33840 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:47.854512   33840 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:47.854534   33840 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:47.854541   33840 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:47.854555   33840 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:47.854832   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:47.854867   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:47.870233   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44141
	I0803 23:15:47.870697   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:47.871146   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:47.871175   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:47.871509   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:47.871665   33840 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:47.873350   33840 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:47.873379   33840 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:47.873729   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:47.873777   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:47.889620   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33001
	I0803 23:15:47.890091   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:47.890634   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:47.890661   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:47.891009   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:47.891217   33840 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:47.894165   33840 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:47.894566   33840 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:47.894596   33840 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:47.894742   33840 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:47.895140   33840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:47.895184   33840 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:47.910595   33840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0803 23:15:47.911027   33840 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:47.911564   33840 main.go:141] libmachine: Using API Version  1
	I0803 23:15:47.911586   33840 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:47.911912   33840 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:47.912066   33840 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:47.912244   33840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:47.912263   33840 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:47.915080   33840 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:47.915506   33840 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:47.915527   33840 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:47.915693   33840 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:47.915868   33840 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:47.916018   33840 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:47.916154   33840 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:48.001023   33840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:48.016309   33840 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
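For the reachable control-plane nodes, the log shows an apiserver health probe ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok"). A minimal Go sketch of such a probe is below; it is an illustration, not minikube's api_server.go, and it skips certificate verification only to stay self-contained (the real cluster CA would be loaded instead).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy reports whether the /healthz endpoint returns HTTP 200.
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch: trust the self-signed test CA implicitly.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println("apiserver healthy:", ok, "err:", err)
}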
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
E0803 23:15:58.007454   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 7 (664.118248ms)

                                                
                                                
-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:15:57.454276   33992 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:15:57.454383   33992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:57.454394   33992 out.go:304] Setting ErrFile to fd 2...
	I0803 23:15:57.454398   33992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:15:57.454594   33992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:15:57.454787   33992 out.go:298] Setting JSON to false
	I0803 23:15:57.454815   33992 mustload.go:65] Loading cluster: ha-076508
	I0803 23:15:57.454852   33992 notify.go:220] Checking for updates...
	I0803 23:15:57.455205   33992 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:15:57.455219   33992 status.go:255] checking status of ha-076508 ...
	I0803 23:15:57.455606   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.455672   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.474959   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0803 23:15:57.475344   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.475913   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.475932   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.476347   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.476550   33992 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:15:57.478392   33992 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:15:57.478430   33992 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:57.478701   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.478732   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.494078   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34261
	I0803 23:15:57.494465   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.494882   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.494902   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.495235   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.495408   33992 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:15:57.498212   33992 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:57.498585   33992 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:57.498656   33992 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:57.498767   33992 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:15:57.499035   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.499064   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.514092   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
	I0803 23:15:57.514464   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.514857   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.514876   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.515223   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.515430   33992 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:15:57.515604   33992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:57.515647   33992 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:15:57.518285   33992 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:57.518630   33992 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:15:57.518655   33992 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:15:57.518824   33992 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:15:57.518994   33992 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:15:57.519139   33992 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:15:57.519255   33992 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:15:57.608114   33992 ssh_runner.go:195] Run: systemctl --version
	I0803 23:15:57.615062   33992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:57.634543   33992 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:57.634565   33992 api_server.go:166] Checking apiserver status ...
	I0803 23:15:57.634593   33992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:57.662512   33992 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:15:57.676358   33992 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:57.676419   33992 ssh_runner.go:195] Run: ls
	I0803 23:15:57.683078   33992 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:57.687666   33992 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:57.687688   33992 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:15:57.687697   33992 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:57.687711   33992 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:15:57.688047   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.688090   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.703723   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39793
	I0803 23:15:57.704097   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.704561   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.704581   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.704876   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.705096   33992 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:15:57.706902   33992 status.go:330] ha-076508-m02 host status = "Stopped" (err=<nil>)
	I0803 23:15:57.706914   33992 status.go:343] host is not running, skipping remaining checks
	I0803 23:15:57.706919   33992 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:57.706938   33992 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:15:57.707320   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.707371   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.722259   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0803 23:15:57.722743   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.723208   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.723229   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.723617   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.723833   33992 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:15:57.725424   33992 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:15:57.725441   33992 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:57.725825   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.725870   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.741987   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0803 23:15:57.742398   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.742931   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.742952   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.743311   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.743520   33992 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:15:57.746464   33992 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:57.746923   33992 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:57.746950   33992 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:57.747129   33992 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:15:57.747439   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.747481   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.762574   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0803 23:15:57.763045   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.763485   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.763513   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.763814   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.763961   33992 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:15:57.764139   33992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:57.764156   33992 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:15:57.767276   33992 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:57.767711   33992 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:15:57.767737   33992 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:15:57.767875   33992 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:15:57.768028   33992 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:15:57.768188   33992 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:15:57.768326   33992 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:15:57.845493   33992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:57.863673   33992 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:15:57.863702   33992 api_server.go:166] Checking apiserver status ...
	I0803 23:15:57.863750   33992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:15:57.887025   33992 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:15:57.902901   33992 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:15:57.902956   33992 ssh_runner.go:195] Run: ls
	I0803 23:15:57.911225   33992 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:15:57.915499   33992 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:15:57.915523   33992 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:15:57.915531   33992 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:15:57.915545   33992 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:15:57.915822   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.915852   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.930297   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0803 23:15:57.930676   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.931103   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.931124   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.931450   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.931639   33992 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:15:57.933081   33992 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:15:57.933102   33992 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:57.933386   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.933437   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.947918   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0803 23:15:57.948313   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.948777   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.948797   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.949116   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.949283   33992 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:15:57.951736   33992 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:57.952205   33992 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:57.952237   33992 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:57.952375   33992 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:15:57.952687   33992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:15:57.952722   33992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:15:57.967692   33992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0803 23:15:57.968133   33992 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:15:57.968588   33992 main.go:141] libmachine: Using API Version  1
	I0803 23:15:57.968607   33992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:15:57.968914   33992 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:15:57.969090   33992 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:15:57.969311   33992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:15:57.969334   33992 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:15:57.972124   33992 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:57.972569   33992 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:15:57.972594   33992 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:15:57.972709   33992 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:15:57.972858   33992 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:15:57.973017   33992 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:15:57.973168   33992 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:15:58.061828   33992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:15:58.077613   33992 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
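Each host check in these runs also probes disk usage with `df -h /var | awk 'NR==2{print $5}'`; when SSH to m02 fails, that is the step reported as "failed to get storage capacity of /var". A small Go sketch of that check, run locally rather than over SSH so it is self-contained (the threshold and error handling are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// varUsagePercent runs the same shell pipeline seen in the log and parses
// the "NN%" field into an integer.
func varUsagePercent() (int, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		return 0, fmt.Errorf("running df: %w", err)
	}
	pct := strings.TrimSuffix(strings.TrimSpace(string(out)), "%")
	return strconv.Atoi(pct)
}

func main() {
	pct, err := varUsagePercent()
	if err != nil {
		// In the failing run above this is where the storage-capacity error surfaces.
		fmt.Println("storage check failed:", err)
		return
	}
	fmt.Printf("/var is %d%% full\n", pct)
}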
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 7 (633.306185ms)

                                                
                                                
-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076508-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:16:05.326965   34081 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:16:05.327198   34081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:16:05.327206   34081 out.go:304] Setting ErrFile to fd 2...
	I0803 23:16:05.327210   34081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:16:05.327398   34081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:16:05.327561   34081 out.go:298] Setting JSON to false
	I0803 23:16:05.327581   34081 mustload.go:65] Loading cluster: ha-076508
	I0803 23:16:05.327617   34081 notify.go:220] Checking for updates...
	I0803 23:16:05.327928   34081 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:16:05.327940   34081 status.go:255] checking status of ha-076508 ...
	I0803 23:16:05.328292   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.328352   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.344083   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0803 23:16:05.344483   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.345125   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.345157   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.345601   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.345814   34081 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:16:05.347434   34081 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:16:05.347461   34081 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:16:05.347873   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.347930   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.362972   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0803 23:16:05.363426   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.363898   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.363920   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.364212   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.364412   34081 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:16:05.367374   34081 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:16:05.367832   34081 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:16:05.367875   34081 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:16:05.367994   34081 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:16:05.368442   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.368488   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.384396   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0803 23:16:05.384846   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.385348   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.385395   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.385729   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.385969   34081 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:16:05.386148   34081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:16:05.386175   34081 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:16:05.389198   34081 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:16:05.389723   34081 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:16:05.389754   34081 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:16:05.389951   34081 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:16:05.390127   34081 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:16:05.390277   34081 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:16:05.390414   34081 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:16:05.478194   34081 ssh_runner.go:195] Run: systemctl --version
	I0803 23:16:05.484410   34081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:16:05.502815   34081 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:16:05.502841   34081 api_server.go:166] Checking apiserver status ...
	I0803 23:16:05.502878   34081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:16:05.518762   34081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0803 23:16:05.528513   34081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:16:05.528578   34081 ssh_runner.go:195] Run: ls
	I0803 23:16:05.533381   34081 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:16:05.539474   34081 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:16:05.539502   34081 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:16:05.539515   34081 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:16:05.539533   34081 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:16:05.539874   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.539915   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.555702   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I0803 23:16:05.556135   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.556548   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.556575   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.556993   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.557216   34081 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:16:05.558785   34081 status.go:330] ha-076508-m02 host status = "Stopped" (err=<nil>)
	I0803 23:16:05.558799   34081 status.go:343] host is not running, skipping remaining checks
	I0803 23:16:05.558814   34081 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:16:05.558833   34081 status.go:255] checking status of ha-076508-m03 ...
	I0803 23:16:05.559124   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.559169   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.576874   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
	I0803 23:16:05.577310   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.577740   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.577767   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.578056   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.578265   34081 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:16:05.579704   34081 status.go:330] ha-076508-m03 host status = "Running" (err=<nil>)
	I0803 23:16:05.579720   34081 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:16:05.580018   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.580049   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.596435   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I0803 23:16:05.596823   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.597306   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.597326   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.597673   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.597883   34081 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:16:05.600607   34081 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:16:05.601066   34081 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:16:05.601102   34081 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:16:05.601331   34081 host.go:66] Checking if "ha-076508-m03" exists ...
	I0803 23:16:05.601770   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.601817   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.617928   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35651
	I0803 23:16:05.618432   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.618974   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.618992   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.619321   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.619576   34081 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:16:05.619789   34081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:16:05.619812   34081 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:16:05.622424   34081 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:16:05.622830   34081 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:16:05.622852   34081 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:16:05.623009   34081 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:16:05.623182   34081 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:16:05.623340   34081 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:16:05.623469   34081 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:16:05.701601   34081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:16:05.716192   34081 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:16:05.716213   34081 api_server.go:166] Checking apiserver status ...
	I0803 23:16:05.716252   34081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:16:05.729990   34081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup
	W0803 23:16:05.739972   34081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1565/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:16:05.740100   34081 ssh_runner.go:195] Run: ls
	I0803 23:16:05.745024   34081 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:16:05.749458   34081 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:16:05.749491   34081 status.go:422] ha-076508-m03 apiserver status = Running (err=<nil>)
	I0803 23:16:05.749503   34081 status.go:257] ha-076508-m03 status: &{Name:ha-076508-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:16:05.749522   34081 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:16:05.749846   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.749880   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.765300   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0803 23:16:05.765760   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.766341   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.766368   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.766744   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.767023   34081 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:16:05.768883   34081 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:16:05.768901   34081 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:16:05.769195   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.769232   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.784322   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
	I0803 23:16:05.784775   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.785258   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.785278   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.785622   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.785803   34081 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:16:05.788688   34081 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:16:05.789152   34081 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:16:05.789177   34081 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:16:05.789328   34081 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:16:05.789690   34081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:05.789726   34081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:05.804386   34081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I0803 23:16:05.804861   34081 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:05.805341   34081 main.go:141] libmachine: Using API Version  1
	I0803 23:16:05.805381   34081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:05.805679   34081 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:05.805897   34081 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:16:05.806110   34081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:16:05.806130   34081 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:16:05.808656   34081 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:16:05.809029   34081 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:16:05.809049   34081 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:16:05.809207   34081 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:16:05.809442   34081 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:16:05.809629   34081 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:16:05.809749   34081 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:16:05.897892   34081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:16:05.914035   34081 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
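The control-plane checks in the stderr log above follow the same pattern on each running node: pgrep confirms a kube-apiserver process, the freezer-cgroup lookup fails harmlessly (likely because the node uses cgroup v2, where /proc/<pid>/cgroup has no freezer entry), and an HTTPS GET on /healthz against the load-balanced endpoint decides Running. A minimal probe sketch of that last step; it skips certificate verification only to stay short, whereas minikube's real client is built from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy reports whether GET <endpoint>/healthz returns 200, the
// same signal the log above treats as "apiserver status = Running".
func apiserverHealthy(endpoint string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.39.254:8443"))
}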
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr" : exit status 7
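Exit status 7 is consistent with ha-076508-m02 being fully stopped while the other nodes stay healthy. Assuming minikube composes the status exit code from bit flags for host, kubelet, and apiserver problems (an assumption about the convention, not something stated in this report), the value decomposes as in the sketch below:

package main

import "fmt"

// Hypothetical flag names; the values are an assumed convention, not taken
// from this report.
const (
	hostNotRunning      = 1 << 0
	kubeletNotRunning   = 1 << 1
	apiserverNotRunning = 1 << 2
)

func main() {
	code := hostNotRunning | kubeletNotRunning | apiserverNotRunning
	fmt.Println("exit status", code) // prints 7, matching ha-076508-m02 being fully stopped
}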
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076508 -n ha-076508
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076508 logs -n 25: (1.56177464s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508:/home/docker/cp-test_ha-076508-m03_ha-076508.txt                      |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508 sudo cat                                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508.txt                                |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m02:/home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m04 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp testdata/cp-test.txt                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508:/home/docker/cp-test_ha-076508-m04_ha-076508.txt                      |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508 sudo cat                                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508.txt                                |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m02:/home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03:/home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m03 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-076508 node stop m02 -v=7                                                    | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-076508 node start m02 -v=7                                                   | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:06:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:06:47.489970   28167 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:06:47.490222   28167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:47.490230   28167 out.go:304] Setting ErrFile to fd 2...
	I0803 23:06:47.490240   28167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:47.490404   28167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:06:47.490927   28167 out.go:298] Setting JSON to false
	I0803 23:06:47.491735   28167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2951,"bootTime":1722723456,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:06:47.491789   28167 start.go:139] virtualization: kvm guest
	I0803 23:06:47.494029   28167 out.go:177] * [ha-076508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:06:47.495467   28167 notify.go:220] Checking for updates...
	I0803 23:06:47.495541   28167 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:06:47.497026   28167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:06:47.498858   28167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:06:47.500281   28167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:47.501865   28167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:06:47.503382   28167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:06:47.504936   28167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:06:47.540276   28167 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 23:06:47.541636   28167 start.go:297] selected driver: kvm2
	I0803 23:06:47.541650   28167 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:06:47.541665   28167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:06:47.542627   28167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:06:47.542715   28167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:06:47.557706   28167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:06:47.557763   28167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:06:47.558059   28167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:06:47.558133   28167 cni.go:84] Creating CNI manager for ""
	I0803 23:06:47.558145   28167 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0803 23:06:47.558159   28167 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 23:06:47.558221   28167 start.go:340] cluster config:
	{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:06:47.558344   28167 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:06:47.560165   28167 out.go:177] * Starting "ha-076508" primary control-plane node in "ha-076508" cluster
	I0803 23:06:47.561417   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:06:47.561457   28167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:06:47.561465   28167 cache.go:56] Caching tarball of preloaded images
	I0803 23:06:47.561558   28167 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:06:47.561573   28167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:06:47.561866   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:06:47.561887   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json: {Name:mke12aaae1c6c743b80b12da59b5b860742452dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:06:47.562034   28167 start.go:360] acquireMachinesLock for ha-076508: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:06:47.562069   28167 start.go:364] duration metric: took 19.4µs to acquireMachinesLock for "ha-076508"
	I0803 23:06:47.562091   28167 start.go:93] Provisioning new machine with config: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:06:47.562165   28167 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 23:06:47.563789   28167 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:06:47.563905   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:06:47.563951   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:06:47.578194   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I0803 23:06:47.578649   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:06:47.579128   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:06:47.579147   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:06:47.579513   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:06:47.579672   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:06:47.579781   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:06:47.579969   28167 start.go:159] libmachine.API.Create for "ha-076508" (driver="kvm2")
	I0803 23:06:47.580000   28167 client.go:168] LocalClient.Create starting
	I0803 23:06:47.580039   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 23:06:47.580071   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:06:47.580086   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:06:47.580153   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 23:06:47.580172   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:06:47.580185   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:06:47.580198   28167 main.go:141] libmachine: Running pre-create checks...
	I0803 23:06:47.580210   28167 main.go:141] libmachine: (ha-076508) Calling .PreCreateCheck
	I0803 23:06:47.580557   28167 main.go:141] libmachine: (ha-076508) Calling .GetConfigRaw
	I0803 23:06:47.580958   28167 main.go:141] libmachine: Creating machine...
	I0803 23:06:47.580971   28167 main.go:141] libmachine: (ha-076508) Calling .Create
	I0803 23:06:47.581080   28167 main.go:141] libmachine: (ha-076508) Creating KVM machine...
	I0803 23:06:47.582143   28167 main.go:141] libmachine: (ha-076508) DBG | found existing default KVM network
	I0803 23:06:47.582776   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.582645   28190 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0803 23:06:47.582798   28167 main.go:141] libmachine: (ha-076508) DBG | created network xml: 
	I0803 23:06:47.582816   28167 main.go:141] libmachine: (ha-076508) DBG | <network>
	I0803 23:06:47.582832   28167 main.go:141] libmachine: (ha-076508) DBG |   <name>mk-ha-076508</name>
	I0803 23:06:47.582843   28167 main.go:141] libmachine: (ha-076508) DBG |   <dns enable='no'/>
	I0803 23:06:47.582852   28167 main.go:141] libmachine: (ha-076508) DBG |   
	I0803 23:06:47.582858   28167 main.go:141] libmachine: (ha-076508) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0803 23:06:47.582865   28167 main.go:141] libmachine: (ha-076508) DBG |     <dhcp>
	I0803 23:06:47.582871   28167 main.go:141] libmachine: (ha-076508) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0803 23:06:47.582878   28167 main.go:141] libmachine: (ha-076508) DBG |     </dhcp>
	I0803 23:06:47.582884   28167 main.go:141] libmachine: (ha-076508) DBG |   </ip>
	I0803 23:06:47.582888   28167 main.go:141] libmachine: (ha-076508) DBG |   
	I0803 23:06:47.582894   28167 main.go:141] libmachine: (ha-076508) DBG | </network>
	I0803 23:06:47.582900   28167 main.go:141] libmachine: (ha-076508) DBG | 
	I0803 23:06:47.587879   28167 main.go:141] libmachine: (ha-076508) DBG | trying to create private KVM network mk-ha-076508 192.168.39.0/24...
	I0803 23:06:47.651816   28167 main.go:141] libmachine: (ha-076508) DBG | private KVM network mk-ha-076508 192.168.39.0/24 created
	I0803 23:06:47.651871   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.651776   28190 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:47.651884   28167 main.go:141] libmachine: (ha-076508) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508 ...
	I0803 23:06:47.651905   28167 main.go:141] libmachine: (ha-076508) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:06:47.651921   28167 main.go:141] libmachine: (ha-076508) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:06:47.895582   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.895470   28190 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa...
	I0803 23:06:47.984578   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.984431   28190 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/ha-076508.rawdisk...
	I0803 23:06:47.984607   28167 main.go:141] libmachine: (ha-076508) DBG | Writing magic tar header
	I0803 23:06:47.984622   28167 main.go:141] libmachine: (ha-076508) DBG | Writing SSH key tar header
	I0803 23:06:47.984667   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:47.984541   28190 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508 ...
	I0803 23:06:47.984680   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508
	I0803 23:06:47.984697   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508 (perms=drwx------)
	I0803 23:06:47.984714   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:06:47.984737   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 23:06:47.984750   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 23:06:47.984759   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:47.984765   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 23:06:47.984774   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:06:47.984786   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:06:47.984799   28167 main.go:141] libmachine: (ha-076508) DBG | Checking permissions on dir: /home
	I0803 23:06:47.984811   28167 main.go:141] libmachine: (ha-076508) DBG | Skipping /home - not owner
	I0803 23:06:47.984829   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 23:06:47.984848   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:06:47.984859   28167 main.go:141] libmachine: (ha-076508) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:06:47.984870   28167 main.go:141] libmachine: (ha-076508) Creating domain...
	I0803 23:06:47.985932   28167 main.go:141] libmachine: (ha-076508) define libvirt domain using xml: 
	I0803 23:06:47.985954   28167 main.go:141] libmachine: (ha-076508) <domain type='kvm'>
	I0803 23:06:47.985976   28167 main.go:141] libmachine: (ha-076508)   <name>ha-076508</name>
	I0803 23:06:47.985990   28167 main.go:141] libmachine: (ha-076508)   <memory unit='MiB'>2200</memory>
	I0803 23:06:47.986002   28167 main.go:141] libmachine: (ha-076508)   <vcpu>2</vcpu>
	I0803 23:06:47.986012   28167 main.go:141] libmachine: (ha-076508)   <features>
	I0803 23:06:47.986026   28167 main.go:141] libmachine: (ha-076508)     <acpi/>
	I0803 23:06:47.986036   28167 main.go:141] libmachine: (ha-076508)     <apic/>
	I0803 23:06:47.986062   28167 main.go:141] libmachine: (ha-076508)     <pae/>
	I0803 23:06:47.986081   28167 main.go:141] libmachine: (ha-076508)     
	I0803 23:06:47.986088   28167 main.go:141] libmachine: (ha-076508)   </features>
	I0803 23:06:47.986105   28167 main.go:141] libmachine: (ha-076508)   <cpu mode='host-passthrough'>
	I0803 23:06:47.986113   28167 main.go:141] libmachine: (ha-076508)   
	I0803 23:06:47.986117   28167 main.go:141] libmachine: (ha-076508)   </cpu>
	I0803 23:06:47.986124   28167 main.go:141] libmachine: (ha-076508)   <os>
	I0803 23:06:47.986129   28167 main.go:141] libmachine: (ha-076508)     <type>hvm</type>
	I0803 23:06:47.986136   28167 main.go:141] libmachine: (ha-076508)     <boot dev='cdrom'/>
	I0803 23:06:47.986142   28167 main.go:141] libmachine: (ha-076508)     <boot dev='hd'/>
	I0803 23:06:47.986148   28167 main.go:141] libmachine: (ha-076508)     <bootmenu enable='no'/>
	I0803 23:06:47.986156   28167 main.go:141] libmachine: (ha-076508)   </os>
	I0803 23:06:47.986175   28167 main.go:141] libmachine: (ha-076508)   <devices>
	I0803 23:06:47.986201   28167 main.go:141] libmachine: (ha-076508)     <disk type='file' device='cdrom'>
	I0803 23:06:47.986217   28167 main.go:141] libmachine: (ha-076508)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/boot2docker.iso'/>
	I0803 23:06:47.986233   28167 main.go:141] libmachine: (ha-076508)       <target dev='hdc' bus='scsi'/>
	I0803 23:06:47.986262   28167 main.go:141] libmachine: (ha-076508)       <readonly/>
	I0803 23:06:47.986280   28167 main.go:141] libmachine: (ha-076508)     </disk>
	I0803 23:06:47.986295   28167 main.go:141] libmachine: (ha-076508)     <disk type='file' device='disk'>
	I0803 23:06:47.986311   28167 main.go:141] libmachine: (ha-076508)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:06:47.986327   28167 main.go:141] libmachine: (ha-076508)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/ha-076508.rawdisk'/>
	I0803 23:06:47.986338   28167 main.go:141] libmachine: (ha-076508)       <target dev='hda' bus='virtio'/>
	I0803 23:06:47.986350   28167 main.go:141] libmachine: (ha-076508)     </disk>
	I0803 23:06:47.986360   28167 main.go:141] libmachine: (ha-076508)     <interface type='network'>
	I0803 23:06:47.986372   28167 main.go:141] libmachine: (ha-076508)       <source network='mk-ha-076508'/>
	I0803 23:06:47.986382   28167 main.go:141] libmachine: (ha-076508)       <model type='virtio'/>
	I0803 23:06:47.986392   28167 main.go:141] libmachine: (ha-076508)     </interface>
	I0803 23:06:47.986410   28167 main.go:141] libmachine: (ha-076508)     <interface type='network'>
	I0803 23:06:47.986426   28167 main.go:141] libmachine: (ha-076508)       <source network='default'/>
	I0803 23:06:47.986436   28167 main.go:141] libmachine: (ha-076508)       <model type='virtio'/>
	I0803 23:06:47.986443   28167 main.go:141] libmachine: (ha-076508)     </interface>
	I0803 23:06:47.986452   28167 main.go:141] libmachine: (ha-076508)     <serial type='pty'>
	I0803 23:06:47.986462   28167 main.go:141] libmachine: (ha-076508)       <target port='0'/>
	I0803 23:06:47.986474   28167 main.go:141] libmachine: (ha-076508)     </serial>
	I0803 23:06:47.986484   28167 main.go:141] libmachine: (ha-076508)     <console type='pty'>
	I0803 23:06:47.986507   28167 main.go:141] libmachine: (ha-076508)       <target type='serial' port='0'/>
	I0803 23:06:47.986526   28167 main.go:141] libmachine: (ha-076508)     </console>
	I0803 23:06:47.986536   28167 main.go:141] libmachine: (ha-076508)     <rng model='virtio'>
	I0803 23:06:47.986549   28167 main.go:141] libmachine: (ha-076508)       <backend model='random'>/dev/random</backend>
	I0803 23:06:47.986559   28167 main.go:141] libmachine: (ha-076508)     </rng>
	I0803 23:06:47.986566   28167 main.go:141] libmachine: (ha-076508)     
	I0803 23:06:47.986580   28167 main.go:141] libmachine: (ha-076508)     
	I0803 23:06:47.986591   28167 main.go:141] libmachine: (ha-076508)   </devices>
	I0803 23:06:47.986600   28167 main.go:141] libmachine: (ha-076508) </domain>
	I0803 23:06:47.986612   28167 main.go:141] libmachine: (ha-076508) 
	I0803 23:06:47.990359   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:ee:29:e0 in network default
	I0803 23:06:47.990927   28167 main.go:141] libmachine: (ha-076508) Ensuring networks are active...
	I0803 23:06:47.990950   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:47.991615   28167 main.go:141] libmachine: (ha-076508) Ensuring network default is active
	I0803 23:06:47.991947   28167 main.go:141] libmachine: (ha-076508) Ensuring network mk-ha-076508 is active
	I0803 23:06:47.992429   28167 main.go:141] libmachine: (ha-076508) Getting domain xml...
	I0803 23:06:47.993139   28167 main.go:141] libmachine: (ha-076508) Creating domain...
	I0803 23:06:49.172673   28167 main.go:141] libmachine: (ha-076508) Waiting to get IP...
	I0803 23:06:49.173616   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:49.174072   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:49.174094   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:49.174055   28190 retry.go:31] will retry after 299.048685ms: waiting for machine to come up
	I0803 23:06:49.474639   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:49.475036   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:49.475065   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:49.474985   28190 retry.go:31] will retry after 364.349968ms: waiting for machine to come up
	I0803 23:06:49.840548   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:49.841056   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:49.841086   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:49.841028   28190 retry.go:31] will retry after 363.489429ms: waiting for machine to come up
	I0803 23:06:50.206557   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:50.206963   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:50.206989   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:50.206887   28190 retry.go:31] will retry after 401.199995ms: waiting for machine to come up
	I0803 23:06:50.609300   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:50.609723   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:50.609756   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:50.609668   28190 retry.go:31] will retry after 523.568123ms: waiting for machine to come up
	I0803 23:06:51.134353   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:51.134834   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:51.134858   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:51.134771   28190 retry.go:31] will retry after 668.196356ms: waiting for machine to come up
	I0803 23:06:51.804536   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:51.804899   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:51.804938   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:51.804860   28190 retry.go:31] will retry after 746.059023ms: waiting for machine to come up
	I0803 23:06:52.552683   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:52.553161   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:52.553186   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:52.553111   28190 retry.go:31] will retry after 983.956736ms: waiting for machine to come up
	I0803 23:06:53.538479   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:53.538881   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:53.538901   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:53.538827   28190 retry.go:31] will retry after 1.575987073s: waiting for machine to come up
	I0803 23:06:55.116547   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:55.116933   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:55.116958   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:55.116890   28190 retry.go:31] will retry after 1.6753366s: waiting for machine to come up
	I0803 23:06:56.794713   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:56.795125   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:56.795151   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:56.795100   28190 retry.go:31] will retry after 1.978262602s: waiting for machine to come up
	I0803 23:06:58.775186   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:06:58.775682   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:06:58.775699   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:06:58.775638   28190 retry.go:31] will retry after 2.58504789s: waiting for machine to come up
	I0803 23:07:01.364479   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:01.364842   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:07:01.364866   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:07:01.364802   28190 retry.go:31] will retry after 3.09859595s: waiting for machine to come up
	I0803 23:07:04.465537   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:04.465910   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find current IP address of domain ha-076508 in network mk-ha-076508
	I0803 23:07:04.465931   28167 main.go:141] libmachine: (ha-076508) DBG | I0803 23:07:04.465871   28190 retry.go:31] will retry after 4.249791833s: waiting for machine to come up
	I0803 23:07:08.717607   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:08.718056   28167 main.go:141] libmachine: (ha-076508) Found IP for machine: 192.168.39.154
	I0803 23:07:08.718075   28167 main.go:141] libmachine: (ha-076508) Reserving static IP address...
	I0803 23:07:08.718088   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has current primary IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:08.718437   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find host DHCP lease matching {name: "ha-076508", mac: "52:54:00:04:c7:ad", ip: "192.168.39.154"} in network mk-ha-076508
	I0803 23:07:08.791835   28167 main.go:141] libmachine: (ha-076508) Reserved static IP address: 192.168.39.154
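The repeated "will retry after ..." lines above come from a generic backoff loop that polls for the machine's DHCP lease until an IP appears. A rough Go sketch of that pattern, with illustrative names rather than minikube's actual retry package:

// waitForIP polls a lookup function with a growing, jittered delay until it
// succeeds or the deadline passes, similar to the retry.go lines in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		// add jitter and back off, roughly matching the growing waits above
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.154", nil
	}, time.Minute)
	fmt.Println(ip, err)
}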
	I0803 23:07:08.791856   28167 main.go:141] libmachine: (ha-076508) Waiting for SSH to be available...
	I0803 23:07:08.791863   28167 main.go:141] libmachine: (ha-076508) DBG | Getting to WaitForSSH function...
	I0803 23:07:08.794443   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:08.794792   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508
	I0803 23:07:08.794816   28167 main.go:141] libmachine: (ha-076508) DBG | unable to find defined IP address of network mk-ha-076508 interface with MAC address 52:54:00:04:c7:ad
	I0803 23:07:08.794991   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH client type: external
	I0803 23:07:08.795016   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa (-rw-------)
	I0803 23:07:08.795050   28167 main.go:141] libmachine: (ha-076508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:07:08.795057   28167 main.go:141] libmachine: (ha-076508) DBG | About to run SSH command:
	I0803 23:07:08.795066   28167 main.go:141] libmachine: (ha-076508) DBG | exit 0
	I0803 23:07:08.799217   28167 main.go:141] libmachine: (ha-076508) DBG | SSH cmd err, output: exit status 255: 
	I0803 23:07:08.799237   28167 main.go:141] libmachine: (ha-076508) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0803 23:07:08.799246   28167 main.go:141] libmachine: (ha-076508) DBG | command : exit 0
	I0803 23:07:08.799253   28167 main.go:141] libmachine: (ha-076508) DBG | err     : exit status 255
	I0803 23:07:08.799264   28167 main.go:141] libmachine: (ha-076508) DBG | output  : 
	I0803 23:07:11.801425   28167 main.go:141] libmachine: (ha-076508) DBG | Getting to WaitForSSH function...
	I0803 23:07:11.803779   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.804325   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:11.804371   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.804535   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH client type: external
	I0803 23:07:11.804565   28167 main.go:141] libmachine: (ha-076508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa (-rw-------)
	I0803 23:07:11.804586   28167 main.go:141] libmachine: (ha-076508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:07:11.804598   28167 main.go:141] libmachine: (ha-076508) DBG | About to run SSH command:
	I0803 23:07:11.804618   28167 main.go:141] libmachine: (ha-076508) DBG | exit 0
	I0803 23:07:11.933600   28167 main.go:141] libmachine: (ha-076508) DBG | SSH cmd err, output: <nil>: 
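The WaitForSSH phase above keeps running "exit 0" over SSH with the same non-interactive options until the command returns status 0 (the first attempt above failed with status 255 before the guest was ready). A minimal Go sketch of that probe; the host and key path are placeholders, not minikube's internal API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over ssh and reports whether it succeeded.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 10; i++ {
		if sshReady("192.168.39.154", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log retries roughly every 3s
	}
	fmt.Println("gave up waiting for SSH")
}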
	I0803 23:07:11.933845   28167 main.go:141] libmachine: (ha-076508) KVM machine creation complete!
	I0803 23:07:11.934170   28167 main.go:141] libmachine: (ha-076508) Calling .GetConfigRaw
	I0803 23:07:11.934761   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:11.935003   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:11.935207   28167 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:07:11.935223   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:11.936615   28167 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:07:11.936629   28167 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:07:11.936634   28167 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:07:11.936640   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:11.939026   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.939414   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:11.939441   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:11.939597   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:11.939771   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:11.939942   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:11.940107   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:11.940274   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:11.940529   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:11.940546   28167 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:07:12.049051   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:07:12.049076   28167 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:07:12.049085   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.052089   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.052517   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.052539   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.052764   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.052954   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.053105   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.053271   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.053468   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.053682   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.053695   28167 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:07:12.162371   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:07:12.162443   28167 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:07:12.162453   28167 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:07:12.162462   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:07:12.162766   28167 buildroot.go:166] provisioning hostname "ha-076508"
	I0803 23:07:12.162795   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:07:12.163114   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.166049   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.166444   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.166475   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.166632   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.166805   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.166994   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.167126   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.167297   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.167478   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.167494   28167 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508 && echo "ha-076508" | sudo tee /etc/hostname
	I0803 23:07:12.292153   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508
	
	I0803 23:07:12.292176   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.295092   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.295463   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.295489   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.295638   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.295830   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.295976   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.296089   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.296243   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.296441   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.296458   28167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:07:12.414678   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:07:12.414705   28167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:07:12.414725   28167 buildroot.go:174] setting up certificates
	I0803 23:07:12.414737   28167 provision.go:84] configureAuth start
	I0803 23:07:12.414749   28167 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:07:12.415054   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:12.417608   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.417930   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.417956   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.418066   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.420424   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.420899   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.420922   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.421045   28167 provision.go:143] copyHostCerts
	I0803 23:07:12.421075   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:07:12.421132   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:07:12.421142   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:07:12.421225   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:07:12.421365   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:07:12.421395   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:07:12.421405   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:07:12.421449   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:07:12.421617   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:07:12.421652   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:07:12.421661   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:07:12.421712   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:07:12.421792   28167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508 san=[127.0.0.1 192.168.39.154 ha-076508 localhost minikube]
	I0803 23:07:12.819787   28167 provision.go:177] copyRemoteCerts
	I0803 23:07:12.819849   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:07:12.819871   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.822738   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.823158   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.823190   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.823305   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.823489   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.823678   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.823831   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:12.907870   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:07:12.907938   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:07:12.932838   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:07:12.932923   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0803 23:07:12.957956   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:07:12.958024   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:07:12.983173   28167 provision.go:87] duration metric: took 568.422623ms to configureAuth
	I0803 23:07:12.983203   28167 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:07:12.983362   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:07:12.983432   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:12.985912   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.986294   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:12.986324   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:12.986487   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:12.986682   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.986874   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:12.986971   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:12.987122   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:12.987281   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:12.987297   28167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:07:13.258685   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:07:13.258721   28167 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:07:13.258732   28167 main.go:141] libmachine: (ha-076508) Calling .GetURL
	I0803 23:07:13.260040   28167 main.go:141] libmachine: (ha-076508) DBG | Using libvirt version 6000000
	I0803 23:07:13.262246   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.262620   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.262649   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.262820   28167 main.go:141] libmachine: Docker is up and running!
	I0803 23:07:13.262834   28167 main.go:141] libmachine: Reticulating splines...
	I0803 23:07:13.262841   28167 client.go:171] duration metric: took 25.682831089s to LocalClient.Create
	I0803 23:07:13.262862   28167 start.go:167] duration metric: took 25.682893298s to libmachine.API.Create "ha-076508"
	I0803 23:07:13.262870   28167 start.go:293] postStartSetup for "ha-076508" (driver="kvm2")
	I0803 23:07:13.262880   28167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:07:13.262896   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.263137   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:07:13.263159   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.265085   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.265469   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.265497   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.265630   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.265806   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.265943   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.266114   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:13.352825   28167 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:07:13.357277   28167 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:07:13.357300   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:07:13.357375   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:07:13.357448   28167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:07:13.357458   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:07:13.357542   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:07:13.368303   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:07:13.394754   28167 start.go:296] duration metric: took 131.872279ms for postStartSetup
	I0803 23:07:13.394801   28167 main.go:141] libmachine: (ha-076508) Calling .GetConfigRaw
	I0803 23:07:13.395357   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:13.397766   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.398067   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.398093   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.398287   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:07:13.398476   28167 start.go:128] duration metric: took 25.836297699s to createHost
	I0803 23:07:13.398499   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.400608   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.400865   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.400892   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.401050   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.401230   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.401394   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.401513   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.401651   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:07:13.401817   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:07:13.401834   28167 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 23:07:13.514455   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722726433.492512851
	
	I0803 23:07:13.514477   28167 fix.go:216] guest clock: 1722726433.492512851
	I0803 23:07:13.514485   28167 fix.go:229] Guest: 2024-08-03 23:07:13.492512851 +0000 UTC Remote: 2024-08-03 23:07:13.398488875 +0000 UTC m=+25.941429857 (delta=94.023976ms)
	I0803 23:07:13.514520   28167 fix.go:200] guest clock delta is within tolerance: 94.023976ms
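The guest-clock check above runs `date +%s.%N` on the guest, parses the epoch output, and accepts the host/guest delta when it is within a tolerance. A small Go sketch of that comparison; the tolerance value is illustrative, not minikube's exact threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to exactly nanoseconds
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722726433.492512851")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta <= tolerance)
}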
	I0803 23:07:13.514527   28167 start.go:83] releasing machines lock for "ha-076508", held for 25.952446969s
	I0803 23:07:13.514543   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.514834   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:13.517401   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.517793   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.517815   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.517978   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.518494   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.518633   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:13.518709   28167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:07:13.518748   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.518833   28167 ssh_runner.go:195] Run: cat /version.json
	I0803 23:07:13.518855   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:13.521510   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.521708   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.521925   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.521948   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.522090   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:13.522110   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:13.522134   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.522304   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:13.522307   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.522472   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:13.522474   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.522662   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:13.522677   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:13.522810   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:13.620582   28167 ssh_runner.go:195] Run: systemctl --version
	I0803 23:07:13.626624   28167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:07:13.790848   28167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:07:13.796926   28167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:07:13.796988   28167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:07:13.814400   28167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:07:13.814425   28167 start.go:495] detecting cgroup driver to use...
	I0803 23:07:13.814481   28167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:07:13.831090   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:07:13.846834   28167 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:07:13.846891   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:07:13.862395   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:07:13.879388   28167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:07:14.014543   28167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:07:14.171743   28167 docker.go:233] disabling docker service ...
	I0803 23:07:14.171799   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:07:14.187004   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:07:14.200675   28167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:07:14.313247   28167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:07:14.422410   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:07:14.437475   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:07:14.457628   28167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:07:14.457699   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.469513   28167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:07:14.469645   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.482373   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.493984   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.505308   28167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:07:14.516663   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.528037   28167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.546046   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:07:14.557885   28167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:07:14.568691   28167 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:07:14.568744   28167 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:07:14.583280   28167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:07:14.593878   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:07:14.701783   28167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:07:14.855293   28167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:07:14.855386   28167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:07:14.861520   28167 start.go:563] Will wait 60s for crictl version
	I0803 23:07:14.861569   28167 ssh_runner.go:195] Run: which crictl
	I0803 23:07:14.865747   28167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:07:14.906262   28167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:07:14.906349   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:07:14.934547   28167 ssh_runner.go:195] Run: crio --version
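After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to exist before asking crictl for the runtime version. A rough Go sketch of that socket wait, run locally for illustration (minikube does the equivalent stat over SSH on the guest):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls stat on the CRI socket until it appears or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}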
	I0803 23:07:14.964520   28167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:07:14.965845   28167 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:07:14.968597   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:14.969165   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:14.969195   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:14.969466   28167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:07:14.973838   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:07:14.987577   28167 kubeadm.go:883] updating cluster {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:07:14.987669   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:07:14.987710   28167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:07:15.027512   28167 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0803 23:07:15.027595   28167 ssh_runner.go:195] Run: which lz4
	I0803 23:07:15.031844   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0803 23:07:15.031955   28167 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 23:07:15.036494   28167 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 23:07:15.036528   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0803 23:07:16.510128   28167 crio.go:462] duration metric: took 1.478209536s to copy over tarball
	I0803 23:07:16.510209   28167 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 23:07:18.736437   28167 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.226201481s)
	I0803 23:07:18.736463   28167 crio.go:469] duration metric: took 2.226302648s to extract the tarball
	I0803 23:07:18.736472   28167 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 23:07:18.775687   28167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:07:18.821770   28167 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:07:18.821797   28167 cache_images.go:84] Images are preloaded, skipping loading
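The preload step above checks whether the tarball is already on the guest, copies it over if not, and unpacks it into /var with the tar flags shown in the log. A minimal Go sketch of that flow; the paths are the ones from this run, and the scp step is only noted rather than implemented:

package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball string) error {
	// existence check, mirroring the stat call in the log
	if err := exec.Command("stat", tarball).Run(); err != nil {
		fmt.Println("preload tarball missing; it would be copied over first (scp step omitted here)")
	}
	// same extraction command as in the log
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}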
	I0803 23:07:18.821807   28167 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.30.3 crio true true} ...
	I0803 23:07:18.821941   28167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:07:18.822014   28167 ssh_runner.go:195] Run: crio config
	I0803 23:07:18.867888   28167 cni.go:84] Creating CNI manager for ""
	I0803 23:07:18.867905   28167 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:07:18.867918   28167 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:07:18.867938   28167 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076508 NodeName:ha-076508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:07:18.868077   28167 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:07:18.868108   28167 kube-vip.go:115] generating kube-vip config ...
	I0803 23:07:18.868154   28167 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:07:18.885252   28167 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:07:18.885387   28167 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
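Static-pod manifests like the kube-vip config above are typically rendered from a Go text template filled with per-cluster values (VIP, interface, image). A short sketch of that kind of templating; the template text and field names here are illustrative and are not minikube's actual kube-vip template:

package main

import (
	"os"
	"text/template"
)

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    env:
    - name: address
      value: "{{.VIP}}"
    - name: vip_interface
      value: {{.Interface}}
  hostNetwork: true
`

type vipConfig struct {
	Image     string
	VIP       string
	Interface string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	cfg := vipConfig{Image: "ghcr.io/kube-vip/kube-vip:v0.8.0", VIP: "192.168.39.254", Interface: "eth0"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}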
	I0803 23:07:18.885486   28167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:07:18.896065   28167 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:07:18.896128   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:07:18.906028   28167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:07:18.923637   28167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:07:18.940633   28167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:07:18.957557   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0803 23:07:18.974793   28167 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:07:18.978897   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:07:18.991740   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:07:19.118712   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:07:19.136049   28167 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.154
	I0803 23:07:19.136070   28167 certs.go:194] generating shared ca certs ...
	I0803 23:07:19.136111   28167 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.136274   28167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:07:19.136332   28167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:07:19.136346   28167 certs.go:256] generating profile certs ...
	I0803 23:07:19.136410   28167 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:07:19.136427   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt with IP's: []
	I0803 23:07:19.399368   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt ...
	I0803 23:07:19.399399   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt: {Name:mk6c61cc1c71006c9038d48e8a7e1f6b49511ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.399595   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key ...
	I0803 23:07:19.399610   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key: {Name:mk95344414c61542ea81c8b8742957ef5d931958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.399714   28167 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe
	I0803 23:07:19.399732   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.254]
	I0803 23:07:19.564196   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe ...
	I0803 23:07:19.564227   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe: {Name:mkbfa31a03e37b87508ca9c99c62a5672518f21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.564406   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe ...
	I0803 23:07:19.564422   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe: {Name:mk7aded0581795aecb14ff48f72570c22d39bf16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.564514   28167 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.6dec06fe -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:07:19.564630   28167 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.6dec06fe -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
	I0803 23:07:19.564726   28167 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:07:19.564746   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt with IP's: []
	I0803 23:07:19.643530   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt ...
	I0803 23:07:19.643561   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt: {Name:mkd930a11b608539f35e44a6b66f29dc5cce84b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:19.643739   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key ...
	I0803 23:07:19.643762   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key: {Name:mk676ce01dd626e5d9c0506670645a6d47a52163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
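The certs.go steps above reuse the shared minikubeCA and then mint per-profile leaf certificates; note that the apiserver cert's IP SANs cover the in-cluster service IP (10.96.0.1), localhost, the node IP (192.168.39.154) and the kube-vip VIP (192.168.39.254), so the API server is valid under any of those addresses. A self-contained standard-library sketch of that pattern (the CA is generated in-process here purely for illustration; the real flow loads the existing ca.crt/ca.key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow would reuse .minikube/ca.crt and ca.key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert whose IP SANs mirror the ones in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.154"),
			net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}

Error handling is elided for brevity; the point is only that the leaf certificate is signed by the CA and carries the same set of IP SANs that the log reports.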
	I0803 23:07:19.643874   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:07:19.643899   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:07:19.643915   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:07:19.643932   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:07:19.643951   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:07:19.643976   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:07:19.643994   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:07:19.644012   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:07:19.644087   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:07:19.644137   28167 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:07:19.644151   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:07:19.644188   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:07:19.644222   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:07:19.644254   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:07:19.644311   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:07:19.644382   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:07:19.644409   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:19.644428   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:07:19.645581   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:07:19.673209   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:07:19.699940   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:07:19.725941   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:07:19.751358   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 23:07:19.779048   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:07:19.809834   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:07:19.838255   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:07:19.867259   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:07:19.896216   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:07:19.942432   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:07:19.977596   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:07:19.995235   28167 ssh_runner.go:195] Run: openssl version
	I0803 23:07:20.001027   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:07:20.012012   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:07:20.016390   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:07:20.016446   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:07:20.022301   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:07:20.032815   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:07:20.043547   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:20.048128   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:20.048184   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:07:20.053963   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:07:20.065029   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:07:20.076923   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:07:20.081664   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:07:20.081729   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:07:20.087575   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:07:20.098494   28167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:07:20.102837   28167 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:07:20.102898   28167 kubeadm.go:392] StartCluster: {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:07:20.102967   28167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:07:20.103041   28167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:07:20.139202   28167 cri.go:89] found id: ""
	I0803 23:07:20.139275   28167 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:07:20.149748   28167 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 23:07:20.159745   28167 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 23:07:20.169671   28167 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 23:07:20.169689   28167 kubeadm.go:157] found existing configuration files:
	
	I0803 23:07:20.169727   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 23:07:20.179245   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 23:07:20.179295   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 23:07:20.189110   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 23:07:20.198522   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 23:07:20.198585   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 23:07:20.208568   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 23:07:20.217742   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 23:07:20.217793   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 23:07:20.227256   28167 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 23:07:20.236550   28167 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 23:07:20.236596   28167 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 23:07:20.246261   28167 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 23:07:20.487650   28167 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 23:07:31.490269   28167 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 23:07:31.490343   28167 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 23:07:31.490439   28167 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 23:07:31.490548   28167 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 23:07:31.490651   28167 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0803 23:07:31.490748   28167 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 23:07:31.492491   28167 out.go:204]   - Generating certificates and keys ...
	I0803 23:07:31.492578   28167 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 23:07:31.492650   28167 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 23:07:31.492733   28167 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 23:07:31.492811   28167 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 23:07:31.492896   28167 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 23:07:31.492966   28167 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 23:07:31.493046   28167 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 23:07:31.493181   28167 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-076508 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I0803 23:07:31.493273   28167 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 23:07:31.493450   28167 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-076508 localhost] and IPs [192.168.39.154 127.0.0.1 ::1]
	I0803 23:07:31.493549   28167 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 23:07:31.493649   28167 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 23:07:31.493687   28167 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 23:07:31.493734   28167 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 23:07:31.493776   28167 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 23:07:31.493823   28167 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 23:07:31.493880   28167 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 23:07:31.493959   28167 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 23:07:31.494024   28167 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 23:07:31.494134   28167 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 23:07:31.494225   28167 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 23:07:31.495749   28167 out.go:204]   - Booting up control plane ...
	I0803 23:07:31.495834   28167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 23:07:31.495902   28167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 23:07:31.495980   28167 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 23:07:31.496079   28167 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 23:07:31.496161   28167 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 23:07:31.496202   28167 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 23:07:31.496319   28167 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 23:07:31.496410   28167 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 23:07:31.496471   28167 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001670926s
	I0803 23:07:31.496568   28167 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 23:07:31.496645   28167 kubeadm.go:310] [api-check] The API server is healthy after 5.827086685s
	I0803 23:07:31.496769   28167 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 23:07:31.496896   28167 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 23:07:31.496986   28167 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 23:07:31.497130   28167 kubeadm.go:310] [mark-control-plane] Marking the node ha-076508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 23:07:31.497223   28167 kubeadm.go:310] [bootstrap-token] Using token: y24y8s.6ynp5uqn81rz378h
	I0803 23:07:31.499530   28167 out.go:204]   - Configuring RBAC rules ...
	I0803 23:07:31.499637   28167 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 23:07:31.499718   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 23:07:31.499853   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 23:07:31.499970   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 23:07:31.500067   28167 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 23:07:31.500142   28167 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 23:07:31.500247   28167 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 23:07:31.500324   28167 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 23:07:31.500402   28167 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 23:07:31.500411   28167 kubeadm.go:310] 
	I0803 23:07:31.500490   28167 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 23:07:31.500498   28167 kubeadm.go:310] 
	I0803 23:07:31.500602   28167 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 23:07:31.500613   28167 kubeadm.go:310] 
	I0803 23:07:31.500644   28167 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 23:07:31.500693   28167 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 23:07:31.500735   28167 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 23:07:31.500741   28167 kubeadm.go:310] 
	I0803 23:07:31.500788   28167 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 23:07:31.500797   28167 kubeadm.go:310] 
	I0803 23:07:31.500841   28167 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 23:07:31.500847   28167 kubeadm.go:310] 
	I0803 23:07:31.500914   28167 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 23:07:31.500987   28167 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 23:07:31.501050   28167 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 23:07:31.501059   28167 kubeadm.go:310] 
	I0803 23:07:31.501125   28167 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 23:07:31.501197   28167 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 23:07:31.501205   28167 kubeadm.go:310] 
	I0803 23:07:31.501276   28167 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y24y8s.6ynp5uqn81rz378h \
	I0803 23:07:31.501377   28167 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0803 23:07:31.501403   28167 kubeadm.go:310] 	--control-plane 
	I0803 23:07:31.501407   28167 kubeadm.go:310] 
	I0803 23:07:31.501475   28167 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 23:07:31.501481   28167 kubeadm.go:310] 
	I0803 23:07:31.501550   28167 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y24y8s.6ynp5uqn81rz378h \
	I0803 23:07:31.501643   28167 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0803 23:07:31.501653   28167 cni.go:84] Creating CNI manager for ""
	I0803 23:07:31.501658   28167 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:07:31.503192   28167 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0803 23:07:31.504428   28167 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0803 23:07:31.510517   28167 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0803 23:07:31.510535   28167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0803 23:07:31.528827   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0803 23:07:31.917829   28167 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 23:07:31.917902   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076508 minikube.k8s.io/updated_at=2024_08_03T23_07_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=ha-076508 minikube.k8s.io/primary=true
	I0803 23:07:31.917908   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:31.956804   28167 ops.go:34] apiserver oom_adj: -16
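The kubectl create clusterrolebinding minikube-rbac call above grants cluster-admin to the kube-system default ServiceAccount so that system and addon pods can manage cluster resources. The same object expressed with client-go (a sketch; the kubeconfig path is a placeholder):

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the log uses /var/lib/minikube/kubeconfig on the VM.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}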
	I0803 23:07:32.096120   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:32.597167   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:33.096957   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:33.596999   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:34.096832   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:34.596471   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:35.097004   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:35.597125   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:36.096172   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:36.596310   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:37.097076   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:37.596611   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:38.096853   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:38.596166   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:39.097183   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:39.596275   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:40.096694   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:40.596509   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:41.096307   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:41.597095   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:42.096232   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:42.596401   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:43.096910   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:07:43.596980   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
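The burst of kubectl get sa default calls above (one roughly every 500ms) is a readiness poll: bring-up only continues once the default ServiceAccount exists, which is the wait the elevateKubeSystemPrivileges duration metric below measures. A hedged client-go version of the same wait loop:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount appears in the
// "default" namespace or the deadline passes.
func waitForDefaultSA(cs *kubernetes.Clientset, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log above
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDefaultSA(cs, 2*time.Minute); err != nil {
		panic(err)
	}
}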
	I0803 23:07:43.747735   28167 kubeadm.go:1113] duration metric: took 11.829901645s to wait for elevateKubeSystemPrivileges
	I0803 23:07:43.747775   28167 kubeadm.go:394] duration metric: took 23.644887361s to StartCluster
	I0803 23:07:43.747795   28167 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:43.747878   28167 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:07:43.748494   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:07:43.748706   28167 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:07:43.748720   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 23:07:43.748727   28167 start.go:241] waiting for startup goroutines ...
	I0803 23:07:43.748734   28167 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 23:07:43.748816   28167 addons.go:69] Setting storage-provisioner=true in profile "ha-076508"
	I0803 23:07:43.748819   28167 addons.go:69] Setting default-storageclass=true in profile "ha-076508"
	I0803 23:07:43.748840   28167 addons.go:234] Setting addon storage-provisioner=true in "ha-076508"
	I0803 23:07:43.748847   28167 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-076508"
	I0803 23:07:43.748869   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:07:43.748966   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:07:43.749277   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.749314   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.749279   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.749409   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.764934   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0803 23:07:43.765472   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.766055   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.766091   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.766400   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.766591   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:43.767941   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
	I0803 23:07:43.768354   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.768847   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.768874   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.769002   28167 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:07:43.769192   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.769328   28167 kapi.go:59] client config for ha-076508: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 23:07:43.769700   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.769726   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.769842   28167 cert_rotation.go:137] Starting client certificate rotation controller
	I0803 23:07:43.770048   28167 addons.go:234] Setting addon default-storageclass=true in "ha-076508"
	I0803 23:07:43.770093   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:07:43.770418   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.770447   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.785471   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44537
	I0803 23:07:43.785702   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0803 23:07:43.785978   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.786083   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.786560   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.786570   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.786588   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.786591   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.786932   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.786936   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.787181   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:43.787518   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:43.787564   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:43.789384   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:43.791206   28167 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:07:43.792308   28167 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:07:43.792326   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:07:43.792342   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:43.795542   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.796005   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:43.796037   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.796203   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:43.796383   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:43.796573   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:43.796741   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:43.802713   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0803 23:07:43.803152   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:43.803606   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:43.803625   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:43.803890   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:43.804068   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:07:43.805796   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:07:43.806001   28167 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:07:43.806016   28167 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:07:43.806033   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:07:43.808977   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.809430   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:07:43.809458   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:07:43.809588   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:07:43.809776   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:07:43.809939   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:07:43.810080   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:07:43.910095   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 23:07:43.959987   28167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:07:43.969087   28167 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:07:44.334727   28167 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
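The long sed pipeline a few lines up rewrites the coredns ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to the host gateway (192.168.39.1), spliced in just before the forward directive so every other name still falls through to upstream DNS; the "host record injected" line above confirms the replace succeeded. A sketch of the same edit performed through client-go instead of sed (string handling deliberately simplified):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostsBlock is the fragment the sed pipeline splices into the Corefile.
const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		// Insert the hosts block just before the forward directive.
		cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}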
	I0803 23:07:44.656780   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.656802   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.656861   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.656888   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.657149   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657166   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.657174   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.657181   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.657194   28167 main.go:141] libmachine: (ha-076508) DBG | Closing plugin on server side
	I0803 23:07:44.657228   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657238   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.657246   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.657254   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.657385   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657406   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.657511   28167 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0803 23:07:44.657523   28167 round_trippers.go:469] Request Headers:
	I0803 23:07:44.657533   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:07:44.657536   28167 main.go:141] libmachine: (ha-076508) DBG | Closing plugin on server side
	I0803 23:07:44.657540   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:07:44.657509   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.657647   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.676707   28167 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0803 23:07:44.677309   28167 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0803 23:07:44.677325   28167 round_trippers.go:469] Request Headers:
	I0803 23:07:44.677333   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:07:44.677337   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:07:44.677341   28167 round_trippers.go:473]     Content-Type: application/json
	I0803 23:07:44.688684   28167 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
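The GET/PUT pair against /storageclasses/standard above is the default-storageclass addon at work; the conventional mechanism is to set the storageclass.kubernetes.io/is-default-class annotation on the fetched object and write it back, which is what this hedged client-go sketch assumes:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Mark "standard" as the cluster's default StorageClass.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}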
	I0803 23:07:44.688995   28167 main.go:141] libmachine: Making call to close driver server
	I0803 23:07:44.689011   28167 main.go:141] libmachine: (ha-076508) Calling .Close
	I0803 23:07:44.689303   28167 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:07:44.689326   28167 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:07:44.689331   28167 main.go:141] libmachine: (ha-076508) DBG | Closing plugin on server side
	I0803 23:07:44.691094   28167 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0803 23:07:44.692810   28167 addons.go:510] duration metric: took 944.073124ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0803 23:07:44.692841   28167 start.go:246] waiting for cluster config update ...
	I0803 23:07:44.692852   28167 start.go:255] writing updated cluster config ...
	I0803 23:07:44.694555   28167 out.go:177] 
	I0803 23:07:44.696127   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:07:44.696200   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:07:44.697954   28167 out.go:177] * Starting "ha-076508-m02" control-plane node in "ha-076508" cluster
	I0803 23:07:44.699690   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:07:44.699717   28167 cache.go:56] Caching tarball of preloaded images
	I0803 23:07:44.699806   28167 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:07:44.699819   28167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:07:44.699882   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:07:44.700198   28167 start.go:360] acquireMachinesLock for ha-076508-m02: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:07:44.700243   28167 start.go:364] duration metric: took 25.065µs to acquireMachinesLock for "ha-076508-m02"
	I0803 23:07:44.700260   28167 start.go:93] Provisioning new machine with config: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:07:44.700324   28167 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0803 23:07:44.702052   28167 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:07:44.702152   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:07:44.702180   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:07:44.717054   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0803 23:07:44.717495   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:07:44.717969   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:07:44.717991   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:07:44.718330   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:07:44.718556   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:07:44.718737   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:07:44.718937   28167 start.go:159] libmachine.API.Create for "ha-076508" (driver="kvm2")
	I0803 23:07:44.718961   28167 client.go:168] LocalClient.Create starting
	I0803 23:07:44.718999   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 23:07:44.719045   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:07:44.719065   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:07:44.719147   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 23:07:44.719176   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:07:44.719192   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:07:44.719212   28167 main.go:141] libmachine: Running pre-create checks...
	I0803 23:07:44.719224   28167 main.go:141] libmachine: (ha-076508-m02) Calling .PreCreateCheck
	I0803 23:07:44.719420   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetConfigRaw
	I0803 23:07:44.720368   28167 main.go:141] libmachine: Creating machine...
	I0803 23:07:44.720385   28167 main.go:141] libmachine: (ha-076508-m02) Calling .Create
	I0803 23:07:44.720530   28167 main.go:141] libmachine: (ha-076508-m02) Creating KVM machine...
	I0803 23:07:44.721969   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found existing default KVM network
	I0803 23:07:44.722090   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found existing private KVM network mk-ha-076508
	I0803 23:07:44.722265   28167 main.go:141] libmachine: (ha-076508-m02) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02 ...
	I0803 23:07:44.722292   28167 main.go:141] libmachine: (ha-076508-m02) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:07:44.722344   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:44.722250   28565 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:07:44.722432   28167 main.go:141] libmachine: (ha-076508-m02) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:07:44.959458   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:44.959322   28565 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa...
	I0803 23:07:45.050295   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:45.050161   28565 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/ha-076508-m02.rawdisk...
	I0803 23:07:45.050328   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Writing magic tar header
	I0803 23:07:45.050343   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Writing SSH key tar header
	I0803 23:07:45.050356   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:45.050266   28565 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02 ...
	I0803 23:07:45.050372   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02
	I0803 23:07:45.050421   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02 (perms=drwx------)
	I0803 23:07:45.050443   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:07:45.050460   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 23:07:45.050483   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 23:07:45.050498   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 23:07:45.050510   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:07:45.050525   28167 main.go:141] libmachine: (ha-076508-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:07:45.050536   28167 main.go:141] libmachine: (ha-076508-m02) Creating domain...
	I0803 23:07:45.050552   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:07:45.050571   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 23:07:45.050592   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:07:45.050603   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:07:45.050617   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Checking permissions on dir: /home
	I0803 23:07:45.050628   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Skipping /home - not owner
	I0803 23:07:45.051486   28167 main.go:141] libmachine: (ha-076508-m02) define libvirt domain using xml: 
	I0803 23:07:45.051508   28167 main.go:141] libmachine: (ha-076508-m02) <domain type='kvm'>
	I0803 23:07:45.051535   28167 main.go:141] libmachine: (ha-076508-m02)   <name>ha-076508-m02</name>
	I0803 23:07:45.051556   28167 main.go:141] libmachine: (ha-076508-m02)   <memory unit='MiB'>2200</memory>
	I0803 23:07:45.051565   28167 main.go:141] libmachine: (ha-076508-m02)   <vcpu>2</vcpu>
	I0803 23:07:45.051571   28167 main.go:141] libmachine: (ha-076508-m02)   <features>
	I0803 23:07:45.051580   28167 main.go:141] libmachine: (ha-076508-m02)     <acpi/>
	I0803 23:07:45.051586   28167 main.go:141] libmachine: (ha-076508-m02)     <apic/>
	I0803 23:07:45.051596   28167 main.go:141] libmachine: (ha-076508-m02)     <pae/>
	I0803 23:07:45.051605   28167 main.go:141] libmachine: (ha-076508-m02)     
	I0803 23:07:45.051616   28167 main.go:141] libmachine: (ha-076508-m02)   </features>
	I0803 23:07:45.051626   28167 main.go:141] libmachine: (ha-076508-m02)   <cpu mode='host-passthrough'>
	I0803 23:07:45.051633   28167 main.go:141] libmachine: (ha-076508-m02)   
	I0803 23:07:45.051646   28167 main.go:141] libmachine: (ha-076508-m02)   </cpu>
	I0803 23:07:45.051674   28167 main.go:141] libmachine: (ha-076508-m02)   <os>
	I0803 23:07:45.051699   28167 main.go:141] libmachine: (ha-076508-m02)     <type>hvm</type>
	I0803 23:07:45.051716   28167 main.go:141] libmachine: (ha-076508-m02)     <boot dev='cdrom'/>
	I0803 23:07:45.051724   28167 main.go:141] libmachine: (ha-076508-m02)     <boot dev='hd'/>
	I0803 23:07:45.051742   28167 main.go:141] libmachine: (ha-076508-m02)     <bootmenu enable='no'/>
	I0803 23:07:45.051755   28167 main.go:141] libmachine: (ha-076508-m02)   </os>
	I0803 23:07:45.051764   28167 main.go:141] libmachine: (ha-076508-m02)   <devices>
	I0803 23:07:45.051774   28167 main.go:141] libmachine: (ha-076508-m02)     <disk type='file' device='cdrom'>
	I0803 23:07:45.051790   28167 main.go:141] libmachine: (ha-076508-m02)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/boot2docker.iso'/>
	I0803 23:07:45.051801   28167 main.go:141] libmachine: (ha-076508-m02)       <target dev='hdc' bus='scsi'/>
	I0803 23:07:45.051810   28167 main.go:141] libmachine: (ha-076508-m02)       <readonly/>
	I0803 23:07:45.051817   28167 main.go:141] libmachine: (ha-076508-m02)     </disk>
	I0803 23:07:45.051827   28167 main.go:141] libmachine: (ha-076508-m02)     <disk type='file' device='disk'>
	I0803 23:07:45.051839   28167 main.go:141] libmachine: (ha-076508-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:07:45.051868   28167 main.go:141] libmachine: (ha-076508-m02)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/ha-076508-m02.rawdisk'/>
	I0803 23:07:45.051878   28167 main.go:141] libmachine: (ha-076508-m02)       <target dev='hda' bus='virtio'/>
	I0803 23:07:45.051890   28167 main.go:141] libmachine: (ha-076508-m02)     </disk>
	I0803 23:07:45.051901   28167 main.go:141] libmachine: (ha-076508-m02)     <interface type='network'>
	I0803 23:07:45.051913   28167 main.go:141] libmachine: (ha-076508-m02)       <source network='mk-ha-076508'/>
	I0803 23:07:45.051920   28167 main.go:141] libmachine: (ha-076508-m02)       <model type='virtio'/>
	I0803 23:07:45.051930   28167 main.go:141] libmachine: (ha-076508-m02)     </interface>
	I0803 23:07:45.051941   28167 main.go:141] libmachine: (ha-076508-m02)     <interface type='network'>
	I0803 23:07:45.051951   28167 main.go:141] libmachine: (ha-076508-m02)       <source network='default'/>
	I0803 23:07:45.051962   28167 main.go:141] libmachine: (ha-076508-m02)       <model type='virtio'/>
	I0803 23:07:45.051972   28167 main.go:141] libmachine: (ha-076508-m02)     </interface>
	I0803 23:07:45.051979   28167 main.go:141] libmachine: (ha-076508-m02)     <serial type='pty'>
	I0803 23:07:45.051990   28167 main.go:141] libmachine: (ha-076508-m02)       <target port='0'/>
	I0803 23:07:45.051999   28167 main.go:141] libmachine: (ha-076508-m02)     </serial>
	I0803 23:07:45.052008   28167 main.go:141] libmachine: (ha-076508-m02)     <console type='pty'>
	I0803 23:07:45.052018   28167 main.go:141] libmachine: (ha-076508-m02)       <target type='serial' port='0'/>
	I0803 23:07:45.052029   28167 main.go:141] libmachine: (ha-076508-m02)     </console>
	I0803 23:07:45.052037   28167 main.go:141] libmachine: (ha-076508-m02)     <rng model='virtio'>
	I0803 23:07:45.052054   28167 main.go:141] libmachine: (ha-076508-m02)       <backend model='random'>/dev/random</backend>
	I0803 23:07:45.052064   28167 main.go:141] libmachine: (ha-076508-m02)     </rng>
	I0803 23:07:45.052071   28167 main.go:141] libmachine: (ha-076508-m02)     
	I0803 23:07:45.052081   28167 main.go:141] libmachine: (ha-076508-m02)     
	I0803 23:07:45.052093   28167 main.go:141] libmachine: (ha-076508-m02)   </devices>
	I0803 23:07:45.052112   28167 main.go:141] libmachine: (ha-076508-m02) </domain>
	I0803 23:07:45.052127   28167 main.go:141] libmachine: (ha-076508-m02) 
	I0803 23:07:45.058836   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:88:e9:4e in network default
	I0803 23:07:45.059428   28167 main.go:141] libmachine: (ha-076508-m02) Ensuring networks are active...
	I0803 23:07:45.059451   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:45.060133   28167 main.go:141] libmachine: (ha-076508-m02) Ensuring network default is active
	I0803 23:07:45.060527   28167 main.go:141] libmachine: (ha-076508-m02) Ensuring network mk-ha-076508 is active
	I0803 23:07:45.061091   28167 main.go:141] libmachine: (ha-076508-m02) Getting domain xml...
	I0803 23:07:45.061900   28167 main.go:141] libmachine: (ha-076508-m02) Creating domain...
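The XML dumped above is the libvirt domain minikube defines for the new node: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (the private mk-ha-076508 network plus the default network). libmachine drives libvirt through its API; the sketch below reproduces the define-and-start step by shelling out to virsh instead, purely as an illustration. The domain.xml path and connection URI are assumptions, not values from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumed inputs: an XML file containing the <domain> definition shown
	// in the log, and the system libvirt socket. minikube itself talks to
	// libvirt directly rather than invoking the virsh CLI.
	const domainXML = "domain.xml"
	const domainName = "ha-076508-m02"

	// Register the domain definition with libvirtd.
	if out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"define", domainXML).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("define failed: %v\n%s", err, out))
	}

	// Boot it; the guest then requests an address on mk-ha-076508 via DHCP.
	if out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"start", domainName).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start failed: %v\n%s", err, out))
	}
}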
	I0803 23:07:46.276718   28167 main.go:141] libmachine: (ha-076508-m02) Waiting to get IP...
	I0803 23:07:46.277542   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:46.278077   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:46.278117   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:46.278049   28565 retry.go:31] will retry after 262.095555ms: waiting for machine to come up
	I0803 23:07:46.541381   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:46.541763   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:46.541789   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:46.541712   28565 retry.go:31] will retry after 322.506254ms: waiting for machine to come up
	I0803 23:07:46.866323   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:46.866715   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:46.866743   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:46.866674   28565 retry.go:31] will retry after 306.839411ms: waiting for machine to come up
	I0803 23:07:47.175280   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:47.175727   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:47.175763   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:47.175683   28565 retry.go:31] will retry after 405.983973ms: waiting for machine to come up
	I0803 23:07:47.583154   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:47.583682   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:47.583730   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:47.583657   28565 retry.go:31] will retry after 521.558917ms: waiting for machine to come up
	I0803 23:07:48.106472   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:48.107190   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:48.107239   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:48.107172   28565 retry.go:31] will retry after 677.724945ms: waiting for machine to come up
	I0803 23:07:48.786099   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:48.786576   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:48.786603   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:48.786530   28565 retry.go:31] will retry after 1.054768836s: waiting for machine to come up
	I0803 23:07:49.843130   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:49.843542   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:49.843570   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:49.843501   28565 retry.go:31] will retry after 1.195620314s: waiting for machine to come up
	I0803 23:07:51.040530   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:51.040986   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:51.041015   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:51.040950   28565 retry.go:31] will retry after 1.178141721s: waiting for machine to come up
	I0803 23:07:52.220851   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:52.221283   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:52.221303   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:52.221240   28565 retry.go:31] will retry after 1.497880009s: waiting for machine to come up
	I0803 23:07:53.720867   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:53.721329   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:53.721347   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:53.721293   28565 retry.go:31] will retry after 1.77773676s: waiting for machine to come up
	I0803 23:07:55.500605   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:55.501010   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:55.501038   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:55.500959   28565 retry.go:31] will retry after 2.214448382s: waiting for machine to come up
	I0803 23:07:57.718319   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:07:57.718692   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:07:57.718714   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:07:57.718662   28565 retry.go:31] will retry after 3.914237089s: waiting for machine to come up
	I0803 23:08:01.634618   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:01.635117   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find current IP address of domain ha-076508-m02 in network mk-ha-076508
	I0803 23:08:01.635141   28167 main.go:141] libmachine: (ha-076508-m02) DBG | I0803 23:08:01.635088   28565 retry.go:31] will retry after 5.603783961s: waiting for machine to come up
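The "will retry after ..." lines above are libmachine polling the network's DHCP leases for the guest's MAC address, with growing delays, until the machine comes up with 192.168.39.245. A minimal Go sketch of that wait loop; lookupLeaseIP is a hypothetical stand-in for the lease query and is not part of the logged code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt network's DHCP leases
// for a MAC address; it is an assumption for this sketch, not minikube's API.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls with an increasing delay, much like the retry.go lines above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // roughly the growth pattern seen in the log
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:d6:c8:3b", 2*time.Minute)
	fmt.Println(ip, err)
}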
	I0803 23:08:07.242373   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.242842   28167 main.go:141] libmachine: (ha-076508-m02) Found IP for machine: 192.168.39.245
	I0803 23:08:07.242864   28167 main.go:141] libmachine: (ha-076508-m02) Reserving static IP address...
	I0803 23:08:07.242875   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.243390   28167 main.go:141] libmachine: (ha-076508-m02) DBG | unable to find host DHCP lease matching {name: "ha-076508-m02", mac: "52:54:00:d6:c8:3b", ip: "192.168.39.245"} in network mk-ha-076508
	I0803 23:08:07.318237   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Getting to WaitForSSH function...
	I0803 23:08:07.318264   28167 main.go:141] libmachine: (ha-076508-m02) Reserved static IP address: 192.168.39.245
	I0803 23:08:07.318276   28167 main.go:141] libmachine: (ha-076508-m02) Waiting for SSH to be available...
	I0803 23:08:07.320887   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.321294   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.321335   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.321495   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Using SSH client type: external
	I0803 23:08:07.321520   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa (-rw-------)
	I0803 23:08:07.321580   28167 main.go:141] libmachine: (ha-076508-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:08:07.321597   28167 main.go:141] libmachine: (ha-076508-m02) DBG | About to run SSH command:
	I0803 23:08:07.321611   28167 main.go:141] libmachine: (ha-076508-m02) DBG | exit 0
	I0803 23:08:07.449726   28167 main.go:141] libmachine: (ha-076508-m02) DBG | SSH cmd err, output: <nil>: 
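For this first reachability check libmachine runs an external ssh process with the options shown above: host key checking disabled, the per-machine id_rsa key, and "exit 0" as the probe command. A sketch of the same probe with os/exec, reusing the key path and address from the log; only a subset of the logged options is kept for brevity.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa"
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.245",
		"exit 0", // succeeds only once sshd inside the guest is up
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("err=%v output=%q\n", err, out)
}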
	I0803 23:08:07.450029   28167 main.go:141] libmachine: (ha-076508-m02) KVM machine creation complete!
	I0803 23:08:07.450332   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetConfigRaw
	I0803 23:08:07.450872   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:07.451077   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:07.451231   28167 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:08:07.451246   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:08:07.452553   28167 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:08:07.452582   28167 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:08:07.452591   28167 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:08:07.452602   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.456057   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.456440   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.456469   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.456619   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.456771   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.456945   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.457072   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.457217   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.457425   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.457437   28167 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:08:07.569167   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
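Once the machine answers, the remaining provisioning commands go through the "native" SSH client whose config is the &{... 192.168.39.245 22 ...} value above. A rough equivalent with golang.org/x/crypto/ssh, using the same user, key and probe command; error handling is deliberately blunt and this is a sketch, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.245:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same probe the log runs: a command that just exits 0.
	fmt.Println("exit 0 ->", session.Run("exit 0"))
}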
	I0803 23:08:07.569193   28167 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:08:07.569201   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.572136   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.572528   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.572564   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.572661   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.572865   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.573051   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.573166   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.573304   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.573486   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.573500   28167 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:08:07.686386   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:08:07.686465   28167 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:08:07.686475   28167 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:08:07.686483   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:08:07.686788   28167 buildroot.go:166] provisioning hostname "ha-076508-m02"
	I0803 23:08:07.686813   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:08:07.686996   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.689797   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.690234   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.690263   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.690392   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.690568   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.690732   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.690876   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.691015   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.691183   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.691194   28167 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508-m02 && echo "ha-076508-m02" | sudo tee /etc/hostname
	I0803 23:08:07.821783   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508-m02
	
	I0803 23:08:07.821812   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.824483   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.824819   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.824847   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.825031   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:07.825247   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.825426   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:07.825583   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:07.825742   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:07.825960   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:07.825985   28167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:08:07.947012   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
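Hostname provisioning is two SSH commands: set the hostname and /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for the node (the script above). A small sketch that renders that script for an arbitrary node name; the template mirrors the log, the helper name is made up for the example.

package main

import "fmt"

// hostsPatchScript renders the /etc/hosts fix-up shown in the log for a node name.
func hostsPatchScript(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Printf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname\n", "ha-076508-m02")
	fmt.Println(hostsPatchScript("ha-076508-m02"))
}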
	I0803 23:08:07.947045   28167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:08:07.947074   28167 buildroot.go:174] setting up certificates
	I0803 23:08:07.947094   28167 provision.go:84] configureAuth start
	I0803 23:08:07.947113   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetMachineName
	I0803 23:08:07.947425   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:07.950324   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.950751   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.950783   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.950933   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:07.953130   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.953512   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:07.953540   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:07.953681   28167 provision.go:143] copyHostCerts
	I0803 23:08:07.953715   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:08:07.953753   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:08:07.953762   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:08:07.953831   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:08:07.953906   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:08:07.953923   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:08:07.953930   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:08:07.953955   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:08:07.953996   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:08:07.954014   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:08:07.954020   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:08:07.954042   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:08:07.954094   28167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508-m02 san=[127.0.0.1 192.168.39.245 ha-076508-m02 localhost minikube]
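The server certificate generated here carries the node's SANs (127.0.0.1, 192.168.39.245, ha-076508-m02, localhost, minikube) and the org jenkins.ha-076508-m02, and is signed by the shared minikube CA (ca.pem/ca-key.pem). The sketch below builds a certificate with the same SAN set using crypto/x509; to stay self-contained it self-signs rather than loading the CA, which is the one deviation from the logged flow.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-076508-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.245")},
		DNSNames:    []string{"ha-076508-m02", "localhost", "minikube"},
	}

	// minikube signs this with the cluster CA; self-signing keeps the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	certOut, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyOut, err := os.Create("server-key.pem")
	if err != nil {
		panic(err)
	}
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}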
	I0803 23:08:08.317485   28167 provision.go:177] copyRemoteCerts
	I0803 23:08:08.317547   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:08:08.317575   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.320596   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.321034   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.321069   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.321246   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.321435   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.321635   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.321758   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:08.408235   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:08:08.408314   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:08:08.434966   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:08:08.435037   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:08:08.463764   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:08:08.463842   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:08:08.489067   28167 provision.go:87] duration metric: took 541.95512ms to configureAuth
	I0803 23:08:08.489096   28167 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:08:08.489277   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:08:08.489379   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.492019   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.492394   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.492424   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.492539   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.492704   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.492790   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.492891   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.493040   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:08.493192   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:08.493205   28167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:08:08.775774   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:08:08.775827   28167 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:08:08.775839   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetURL
	I0803 23:08:08.777262   28167 main.go:141] libmachine: (ha-076508-m02) DBG | Using libvirt version 6000000
	I0803 23:08:08.779496   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.779845   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.779872   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.780041   28167 main.go:141] libmachine: Docker is up and running!
	I0803 23:08:08.780059   28167 main.go:141] libmachine: Reticulating splines...
	I0803 23:08:08.780067   28167 client.go:171] duration metric: took 24.061098594s to LocalClient.Create
	I0803 23:08:08.780094   28167 start.go:167] duration metric: took 24.061158189s to libmachine.API.Create "ha-076508"
	I0803 23:08:08.780106   28167 start.go:293] postStartSetup for "ha-076508-m02" (driver="kvm2")
	I0803 23:08:08.780118   28167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:08:08.780149   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:08.780381   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:08:08.780402   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.782577   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.782870   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.782900   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.783049   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.783239   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.783399   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.783516   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:08.868965   28167 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:08:08.873427   28167 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:08:08.873452   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:08:08.873536   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:08:08.873636   28167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:08:08.873650   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:08:08.873765   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:08:08.883854   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:08:08.910258   28167 start.go:296] duration metric: took 130.136737ms for postStartSetup
	I0803 23:08:08.910312   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetConfigRaw
	I0803 23:08:08.910868   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:08.913571   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.913868   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.913897   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.914128   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:08:08.914308   28167 start.go:128] duration metric: took 24.213972239s to createHost
	I0803 23:08:08.914329   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:08.916673   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.917132   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:08.917157   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:08.917315   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:08.917547   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.917684   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:08.917792   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:08.918110   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:08:08.918320   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0803 23:08:08.918335   28167 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:08:09.030445   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722726489.006665611
	
	I0803 23:08:09.030465   28167 fix.go:216] guest clock: 1722726489.006665611
	I0803 23:08:09.030473   28167 fix.go:229] Guest: 2024-08-03 23:08:09.006665611 +0000 UTC Remote: 2024-08-03 23:08:08.914318937 +0000 UTC m=+81.457259917 (delta=92.346674ms)
	I0803 23:08:09.030488   28167 fix.go:200] guest clock delta is within tolerance: 92.346674ms
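The guest clock check runs a date command in the VM and compares the result with the host's wall clock; here the delta is about 92 ms, inside tolerance. A minimal sketch of that comparison, parsing the "seconds.nanoseconds" string that appears in the log output above.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's "seconds.nanoseconds" clock output
// (e.g. "1722726489.006665611") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722726489.006665611")
	if err != nil {
		panic(err)
	}
	// In the logged run the host reference is taken at the same moment,
	// giving the small delta reported above; here we compare against now.
	fmt.Println("guest clock delta:", time.Since(guest))
}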
	I0803 23:08:09.030493   28167 start.go:83] releasing machines lock for "ha-076508-m02", held for 24.330240912s
	I0803 23:08:09.030510   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.030890   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:09.033519   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.034038   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:09.034068   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.036570   28167 out.go:177] * Found network options:
	I0803 23:08:09.038141   28167 out.go:177]   - NO_PROXY=192.168.39.154
	W0803 23:08:09.039650   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:08:09.039686   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.040356   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.040522   28167 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:08:09.040590   28167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:08:09.040631   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	W0803 23:08:09.040711   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:08:09.040784   28167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:08:09.040816   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:08:09.043490   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.043734   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.043905   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:09.043935   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.044087   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:09.044106   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:09.044121   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:09.044312   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:08:09.044325   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:09.044532   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:09.044534   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:08:09.044698   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:08:09.044739   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:09.044867   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:08:09.282944   28167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:08:09.289754   28167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:08:09.289860   28167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:08:09.306644   28167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:08:09.306665   28167 start.go:495] detecting cgroup driver to use...
	I0803 23:08:09.306719   28167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:08:09.323473   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:08:09.338325   28167 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:08:09.338398   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:08:09.354671   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:08:09.371514   28167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:08:09.490414   28167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:08:09.633233   28167 docker.go:233] disabling docker service ...
	I0803 23:08:09.633307   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:08:09.649648   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:08:09.663216   28167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:08:09.798744   28167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:08:09.933183   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:08:09.948876   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:08:09.968963   28167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:08:09.969030   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:09.980877   28167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:08:09.980937   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:09.992527   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.003373   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.013679   28167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:08:10.024067   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.034653   28167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:08:10.053928   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
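The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A Go sketch of the first two rewrites using regexp (the remaining edits follow the same pattern); it prints the result instead of writing it back, so it is safe to run outside the VM.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Path used in the log above; point it at a local copy for experiments.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)

	// Same substitutions the first two sed invocations perform.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)

	fmt.Print(s) // minikube writes the edited file back and restarts crio
}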
	I0803 23:08:10.066025   28167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:08:10.076716   28167 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:08:10.076785   28167 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:08:10.091227   28167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
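Before restarting CRI-O the node enables IPv4 forwarding and, since the bridge sysctl is missing here, loads br_netfilter. A sketch doing the same through the proc filesystem; it needs root and uses only the standard kernel paths shown in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (requires root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}

	// If the bridge netfilter sysctl is absent, load the module, as the log does
	// after the failed "sysctl net.bridge.bridge-nf-call-iptables" probe.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("modprobe br_netfilter: %v\n%s", err, out))
		}
	}
	fmt.Println("ip_forward enabled, br_netfilter present")
}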
	I0803 23:08:10.101389   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:08:10.219495   28167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:08:10.364061   28167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:08:10.364144   28167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:08:10.370226   28167 start.go:563] Will wait 60s for crictl version
	I0803 23:08:10.370294   28167 ssh_runner.go:195] Run: which crictl
	I0803 23:08:10.374289   28167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:08:10.418729   28167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:08:10.418821   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:08:10.448365   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:08:10.480036   28167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:08:10.481727   28167 out.go:177]   - env NO_PROXY=192.168.39.154
	I0803 23:08:10.483057   28167 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:08:10.486017   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:10.486299   28167 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:59 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:08:10.486319   28167 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:08:10.486557   28167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:08:10.490779   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:08:10.503703   28167 mustload.go:65] Loading cluster: ha-076508
	I0803 23:08:10.503952   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:08:10.504210   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:08:10.504235   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:08:10.518805   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0803 23:08:10.519287   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:08:10.519717   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:08:10.519738   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:08:10.520123   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:08:10.520329   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:08:10.522069   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:08:10.522343   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:08:10.522370   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:08:10.537123   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39771
	I0803 23:08:10.537555   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:08:10.537989   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:08:10.538007   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:08:10.538315   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:08:10.538493   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:08:10.538716   28167 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.245
	I0803 23:08:10.538728   28167 certs.go:194] generating shared ca certs ...
	I0803 23:08:10.538742   28167 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:08:10.538878   28167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:08:10.538934   28167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:08:10.538947   28167 certs.go:256] generating profile certs ...
	I0803 23:08:10.539044   28167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:08:10.539081   28167 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72
	I0803 23:08:10.539103   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.245 192.168.39.254]
	I0803 23:08:10.607588   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72 ...
	I0803 23:08:10.607617   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72: {Name:mk5470fdf54109f9a0315f27866a337c16f70579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:08:10.607797   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72 ...
	I0803 23:08:10.607819   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72: {Name:mkb13d3af1c57c46674af59886c41467b9704ffb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:08:10.607915   28167 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.7ef0ec72 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:08:10.608041   28167 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.7ef0ec72 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
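The certs.go lines above issue the profile's apiserver certificate with the cluster service IP, localhost, both node IPs, and the kube-vip VIP 192.168.39.254 as subject alternative names, then move it into place under its final name. A self-contained crypto/x509 sketch of issuing such a SAN certificate; it uses a throwaway in-memory CA purely so the example runs, whereas minikube signs with the shared CA under .minikube/ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA so the example is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf apiserver certificate carrying the SAN IPs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.154"), net.ParseIP("192.168.39.245"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	fmt.Println("issued", len(leafDER), "DER bytes, err =", err)
}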
	I0803 23:08:10.608163   28167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:08:10.608177   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:08:10.608190   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:08:10.608201   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:08:10.608211   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:08:10.608220   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:08:10.608231   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:08:10.608241   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:08:10.608253   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:08:10.608299   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:08:10.608361   28167 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:08:10.608373   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:08:10.608398   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:08:10.608421   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:08:10.608442   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:08:10.608475   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:08:10.608498   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:10.608513   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:08:10.608525   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:08:10.608555   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:08:10.611493   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:08:10.611839   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:08:10.611865   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:08:10.612030   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:08:10.612229   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:08:10.612424   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:08:10.612561   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:08:10.697777   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub

	I0803 23:08:10.703743   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:08:10.717864   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0803 23:08:10.722732   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0803 23:08:10.733619   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:08:10.738226   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:08:10.751783   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:08:10.756588   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:08:10.770950   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:08:10.775965   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:08:10.788447   28167 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0803 23:08:10.793802   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:08:10.809142   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:08:10.835911   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:08:10.862796   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:08:10.892760   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:08:10.920294   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0803 23:08:10.945621   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:08:10.971720   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:08:10.997989   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:08:11.022543   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:08:11.048466   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:08:11.074385   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:08:11.098355   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:08:11.116658   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0803 23:08:11.133799   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:08:11.150804   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:08:11.169616   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:08:11.186653   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:08:11.204123   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:08:11.221204   28167 ssh_runner.go:195] Run: openssl version
	I0803 23:08:11.227629   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:08:11.239234   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:11.243933   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:11.243986   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:08:11.250720   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:08:11.262577   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:08:11.275722   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:08:11.280617   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:08:11.280683   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:08:11.286720   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:08:11.299605   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:08:11.312849   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:08:11.317867   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:08:11.317911   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:08:11.323881   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
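Each CA file copied under /usr/share/ca-certificates is then linked into /etc/ssl/certs by its OpenSSL subject hash (the b5213941.0, 51391683.0, and 3ec20f2e.0 names above), which is how the guest's system trust store discovers them. A sketch that reproduces the hash-and-symlink step by shelling out to openssl, assuming openssl is on PATH and the process can write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert runs `openssl x509 -hash -noout -in <pem>` and symlinks the PEM
// to /etc/ssl/certs/<hash>.0, the same effect as the log's ln -fs commands.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic ln -fs: replace an existing link if present
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}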
	I0803 23:08:11.335450   28167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:08:11.340056   28167 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:08:11.340113   28167 kubeadm.go:934] updating node {m02 192.168.39.245 8443 v1.30.3 crio true true} ...
	I0803 23:08:11.340191   28167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
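The kubelet [Unit]/[Service] snippet above is rendered from the joining node's name, IP, and Kubernetes version and later written to the 10-kubeadm.conf drop-in seen further down in the log. A minimal text/template sketch of that rendering; the template text is paraphrased from the log output, not copied from minikube's source:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from this log's m02 node.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"NodeName":          "ha-076508-m02",
		"NodeIP":            "192.168.39.245",
	})
}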
	I0803 23:08:11.340217   28167 kube-vip.go:115] generating kube-vip config ...
	I0803 23:08:11.340258   28167 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:08:11.362090   28167 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:08:11.362155   28167 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:08:11.362223   28167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:08:11.375039   28167 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:08:11.375130   28167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:08:11.387416   28167 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0803 23:08:11.387444   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:08:11.387471   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:08:11.387532   28167 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0803 23:08:11.387591   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:08:11.392212   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:08:11.392238   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:08:12.659618   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:08:12.659709   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:08:12.665019   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:08:12.665074   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:09:21.509046   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:09:21.526148   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:09:21.526234   28167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:09:21.530687   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:09:21.530722   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
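kubectl, kubeadm, and kubelet are fetched from dl.k8s.io with a checksum=file:...sha256 query, i.e. the cached download is validated against the published SHA-256 before being copied into /var/lib/minikube/binaries. A sketch of that verification step applied to an already-downloaded file (URL and cache path are the ones from this log; the verifySHA256 helper itself is illustrative, not minikube's download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verifySHA256 fetches the published .sha256 for a release binary and
// compares it with the local file's digest.
func verifySHA256(localPath, shaURL string) error {
	resp, err := http.Get(shaURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch: got %s", got)
	}
	return nil
}

func main() {
	err := verifySHA256(
		"/home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet",
		"https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256")
	fmt.Println(err)
}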
	I0803 23:09:21.944143   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:09:21.953877   28167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0803 23:09:21.970785   28167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:09:21.988574   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:09:22.006385   28167 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:09:22.010604   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:09:22.022805   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:09:22.146860   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:09:22.163978   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:09:22.164299   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:09:22.164333   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:09:22.179458   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39883
	I0803 23:09:22.179882   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:09:22.180310   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:09:22.180336   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:09:22.180630   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:09:22.180826   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:09:22.180979   28167 start.go:317] joinCluster: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:09:22.181096   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:09:22.181112   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:09:22.183955   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:09:22.184367   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:09:22.184398   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:09:22.184513   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:09:22.184682   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:09:22.184818   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:09:22.184940   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:09:22.356179   28167 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:09:22.356218   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1433o5.s4u1fkuqzly79dfp --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I0803 23:09:43.804396   28167 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1433o5.s4u1fkuqzly79dfp --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (21.448151925s)
	I0803 23:09:43.804432   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:09:44.423119   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076508-m02 minikube.k8s.io/updated_at=2024_08_03T23_09_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=ha-076508 minikube.k8s.io/primary=false
	I0803 23:09:44.562392   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076508-m02 node-role.kubernetes.io/control-plane:NoSchedule-
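Once kubeadm join succeeds, the new node is labeled with minikube's bookkeeping metadata and the control-plane NoSchedule taint is removed so ordinary workloads can schedule onto it, which is exactly the pair of kubectl invocations above. A sketch of the same two calls via os/exec (binary path, kubeconfig path, node name, and taint string are taken from the log; the label set here is abbreviated):

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	base := []string{"--kubeconfig=/var/lib/minikube/kubeconfig"}
	out, err := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl",
		append(base, args...)...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	node := "ha-076508-m02"
	// Mark the node with (a subset of) minikube's bookkeeping labels.
	_ = kubectl("label", "--overwrite", "nodes", node,
		"minikube.k8s.io/name=ha-076508", "minikube.k8s.io/primary=false")
	// Drop the control-plane taint so regular pods can land on this node.
	_ = kubectl("taint", "nodes", node, "node-role.kubernetes.io/control-plane:NoSchedule-")
}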
	I0803 23:09:44.678628   28167 start.go:319] duration metric: took 22.497645294s to joinCluster
	I0803 23:09:44.678700   28167 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:09:44.679030   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:09:44.680593   28167 out.go:177] * Verifying Kubernetes components...
	I0803 23:09:44.682197   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:09:44.987445   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:09:45.056753   28167 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:09:45.056960   28167 kapi.go:59] client config for ha-076508: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:09:45.057011   28167 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.154:8443
	I0803 23:09:45.057196   28167 node_ready.go:35] waiting up to 6m0s for node "ha-076508-m02" to be "Ready" ...
	I0803 23:09:45.057288   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:45.057296   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:45.057303   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:45.057309   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:45.068701   28167 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0803 23:09:45.557385   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:45.557406   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:45.557414   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:45.557418   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:45.561991   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:46.058061   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:46.058087   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:46.058098   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:46.058106   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:46.064936   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:09:46.558430   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:46.558457   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:46.558465   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:46.558468   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:46.562544   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:47.057830   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:47.057852   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:47.057860   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:47.057866   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:47.062460   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:47.063462   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:47.558007   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:47.558036   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:47.558049   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:47.558054   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:47.561647   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:48.058234   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:48.058254   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:48.058265   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:48.058271   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:48.061711   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:48.557523   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:48.557546   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:48.557557   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:48.557561   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:48.562001   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:49.057788   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:49.057821   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:49.057835   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:49.057840   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:49.061291   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:49.558036   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:49.558056   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:49.558065   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:49.558068   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:49.562374   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:49.562984   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:50.057836   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:50.057860   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:50.057869   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:50.057874   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:50.061458   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:50.558217   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:50.558239   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:50.558247   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:50.558251   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:50.562002   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:51.058208   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:51.058234   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:51.058244   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:51.058251   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:51.061560   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:51.557746   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:51.557770   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:51.557782   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:51.557787   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:51.562369   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:52.057915   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:52.057932   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:52.057940   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:52.057945   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:52.060865   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:09:52.061677   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:52.557598   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:52.557628   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:52.557643   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:52.557649   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:52.562479   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:53.058196   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:53.058224   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:53.058236   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:53.058243   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:53.067540   28167 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0803 23:09:53.557515   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:53.557533   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:53.557541   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:53.557546   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:53.560896   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:54.058431   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:54.058454   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:54.058463   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:54.058468   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:54.061946   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:54.062505   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:54.558011   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:54.558037   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:54.558049   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:54.558057   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:54.562411   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:55.057939   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:55.057962   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:55.057972   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:55.057983   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:55.060619   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:09:55.558029   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:55.558052   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:55.558064   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:55.558071   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:55.562338   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:09:56.058362   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:56.058383   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:56.058394   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:56.058401   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:56.062018   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:56.557862   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:56.557891   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:56.557899   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:56.557903   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:56.561601   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:56.562123   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:57.057472   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:57.057492   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:57.057500   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:57.057505   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:57.061083   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:57.557892   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:57.557915   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:57.557924   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:57.557928   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:57.561449   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:58.058028   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:58.058056   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:58.058069   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:58.058074   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:58.061875   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:58.558022   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:58.558044   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:58.558052   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:58.558056   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:58.561975   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:09:58.562554   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:09:59.057981   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:59.058004   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:59.058015   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:59.058021   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:59.063360   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:09:59.557496   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:09:59.557520   28167 round_trippers.go:469] Request Headers:
	I0803 23:09:59.557530   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:09:59.557536   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:09:59.560887   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:00.058260   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:00.058289   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:00.058299   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:00.058303   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:00.062911   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:00.558439   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:00.558465   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:00.558475   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:00.558480   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:00.563951   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:10:00.564494   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:10:01.057823   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:01.057847   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:01.057858   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:01.057863   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:01.061961   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:01.557524   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:01.557547   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:01.557557   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:01.557564   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:01.560766   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:02.057852   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:02.057880   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:02.057892   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:02.057897   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:02.061338   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:02.558420   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:02.558441   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:02.558451   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:02.558457   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:02.562393   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.058015   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:03.058041   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.058050   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.058054   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.061815   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.062300   28167 node_ready.go:53] node "ha-076508-m02" has status "Ready":"False"
	I0803 23:10:03.557672   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:03.557695   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.557703   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.557708   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.561161   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.562034   28167 node_ready.go:49] node "ha-076508-m02" has status "Ready":"True"
	I0803 23:10:03.562054   28167 node_ready.go:38] duration metric: took 18.504824963s for node "ha-076508-m02" to be "Ready" ...
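The long run of GET /api/v1/nodes/ha-076508-m02 requests above is minikube polling roughly twice per second until the node reports Ready (about 18.5s here). A client-go sketch of the same wait using wait.PollUntilContextTimeout rather than minikube's node_ready helper; the kubeconfig path and node name are the ones from this log, and the 500ms/6m timings are assumptions matching the observed cadence:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-9607/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, up to 6 minutes, until the node's Ready condition is True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-076508-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished:", err)
}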
	I0803 23:10:03.562070   28167 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:10:03.562135   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:03.562144   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.562151   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.562155   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.567869   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:10:03.574443   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.574536   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g4nns
	I0803 23:10:03.574555   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.574567   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.574577   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.577857   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.578621   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.578637   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.578645   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.578650   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.581732   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.582382   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.582398   28167 pod_ready.go:81] duration metric: took 7.929465ms for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.582407   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.582456   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jm52b
	I0803 23:10:03.582463   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.582470   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.582475   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.585043   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.585739   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.585754   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.585762   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.585767   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.587887   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.588453   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.588474   28167 pod_ready.go:81] duration metric: took 6.06048ms for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.588485   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.588549   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508
	I0803 23:10:03.588559   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.588569   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.588576   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.590791   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.591475   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.591492   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.591504   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.591510   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.593701   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.594240   28167 pod_ready.go:92] pod "etcd-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.594262   28167 pod_ready.go:81] duration metric: took 5.764629ms for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.594273   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.594321   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m02
	I0803 23:10:03.594328   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.594335   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.594339   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.598557   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:03.599422   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:03.599434   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.599441   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.599450   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.601740   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:03.604216   28167 pod_ready.go:92] pod "etcd-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.604234   28167 pod_ready.go:81] duration metric: took 9.953932ms for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.604253   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.757799   28167 request.go:629] Waited for 153.482159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:10:03.757862   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:10:03.757867   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.757875   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.757879   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.761560   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:03.958391   28167 request.go:629] Waited for 196.043448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.958441   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:03.958446   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:03.958454   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:03.958458   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:03.962496   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:03.963132   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:03.963151   28167 pod_ready.go:81] duration metric: took 358.889806ms for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:03.963165   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.158373   28167 request.go:629] Waited for 195.12224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:10:04.158439   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:10:04.158445   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.158456   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.158461   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.161999   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:04.358081   28167 request.go:629] Waited for 195.407692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:04.358137   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:04.358142   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.358150   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.358154   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.361889   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:04.362665   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:04.362686   28167 pod_ready.go:81] duration metric: took 399.512992ms for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.362696   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.557711   28167 request.go:629] Waited for 194.942202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:10:04.557780   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:10:04.557786   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.557795   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.557802   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.567075   28167 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0803 23:10:04.758119   28167 request.go:629] Waited for 190.367024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:04.758196   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:04.758202   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.758211   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.758215   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.762702   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:04.763905   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:04.763925   28167 pod_ready.go:81] duration metric: took 401.222332ms for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.763938   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:04.958067   28167 request.go:629] Waited for 194.05371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:10:04.958157   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:10:04.958165   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:04.958180   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:04.958190   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:04.961483   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:05.158538   28167 request.go:629] Waited for 196.325518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.158588   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.158593   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.158602   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.158605   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.161325   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:10:05.161730   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:05.161749   28167 pod_ready.go:81] duration metric: took 397.803013ms for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.161761   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.358005   28167 request.go:629] Waited for 196.170756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:10:05.358086   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:10:05.358095   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.358102   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.358112   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.362136   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:05.558654   28167 request.go:629] Waited for 195.840812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.558704   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:05.558709   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.558717   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.558723   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.562855   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:05.563314   28167 pod_ready.go:92] pod "kube-proxy-hkfgl" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:05.563331   28167 pod_ready.go:81] duration metric: took 401.562684ms for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.563343   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.758434   28167 request.go:629] Waited for 195.023596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:10:05.758521   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:10:05.758537   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.758548   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.758557   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.762220   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:05.958165   28167 request.go:629] Waited for 195.399403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:05.958223   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:05.958228   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:05.958236   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:05.958241   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:05.962239   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:05.963167   28167 pod_ready.go:92] pod "kube-proxy-jvj96" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:05.963185   28167 pod_ready.go:81] duration metric: took 399.834576ms for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:05.963194   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.158316   28167 request.go:629] Waited for 195.044863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:10:06.158376   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:10:06.158381   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.158389   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.158394   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.170042   28167 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0803 23:10:06.357901   28167 request.go:629] Waited for 187.300794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:06.357960   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:10:06.357965   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.357972   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.357976   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.361223   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:06.361766   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:06.361788   28167 pod_ready.go:81] duration metric: took 398.588522ms for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.361798   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.557911   28167 request.go:629] Waited for 196.032404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:10:06.557969   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:10:06.557975   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.557983   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.557991   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.561105   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:06.758066   28167 request.go:629] Waited for 196.362667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:06.758138   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:10:06.758143   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.758152   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.758157   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.762072   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:06.762508   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:10:06.762526   28167 pod_ready.go:81] duration metric: took 400.722781ms for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:10:06.762536   28167 pod_ready.go:38] duration metric: took 3.200448227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
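	[note] The wait loop above issues one GET per control-plane pod plus one for its node and marks the pod done once its Ready condition is True; the recurring "Waited for … due to client-side throttling" lines come from client-go's default QPS/Burst rate limiter, not API priority-and-fairness. A minimal sketch of the same readiness check with client-go (illustrative only, not minikube's own helper; the pod name and kubeconfig path are assumptions):

```go
// podready_sketch.go - illustrative readiness poll; not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst is what would reduce the "client-side throttling" waits seen in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-076508", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```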
	I0803 23:10:06.762563   28167 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:10:06.762634   28167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:10:06.784072   28167 api_server.go:72] duration metric: took 22.105332742s to wait for apiserver process to appear ...
	I0803 23:10:06.784107   28167 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:10:06.784132   28167 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I0803 23:10:06.788410   28167 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I0803 23:10:06.788476   28167 round_trippers.go:463] GET https://192.168.39.154:8443/version
	I0803 23:10:06.788484   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.788492   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.788495   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.789307   28167 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0803 23:10:06.789427   28167 api_server.go:141] control plane version: v1.30.3
	I0803 23:10:06.789445   28167 api_server.go:131] duration metric: took 5.331655ms to wait for apiserver health ...
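	[note] The healthz/version step above amounts to two plain GETs against the apiserver endpoint. A rough sketch with net/http (illustrative; TLS verification is skipped here only to keep the example self-contained, whereas a real probe would trust the cluster CA):

```go
// healthz_probe.go - illustrative apiserver health probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity: skip CA verification; a real check should use the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.154:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok", as in the log above.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}
```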
	I0803 23:10:06.789454   28167 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:10:06.957795   28167 request.go:629] Waited for 168.278061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:06.957878   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:06.957884   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:06.957891   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:06.957895   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:06.965395   28167 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:10:06.969834   28167 system_pods.go:59] 17 kube-system pods found
	I0803 23:10:06.969868   28167 system_pods.go:61] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:10:06.969874   28167 system_pods.go:61] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:10:06.969878   28167 system_pods.go:61] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:10:06.969882   28167 system_pods.go:61] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:10:06.969885   28167 system_pods.go:61] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:10:06.969888   28167 system_pods.go:61] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:10:06.969892   28167 system_pods.go:61] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:10:06.969895   28167 system_pods.go:61] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:10:06.969898   28167 system_pods.go:61] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:10:06.969901   28167 system_pods.go:61] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:10:06.969903   28167 system_pods.go:61] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:10:06.969906   28167 system_pods.go:61] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:10:06.969909   28167 system_pods.go:61] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:10:06.969911   28167 system_pods.go:61] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:10:06.969914   28167 system_pods.go:61] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:10:06.969917   28167 system_pods.go:61] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:10:06.969919   28167 system_pods.go:61] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:10:06.969925   28167 system_pods.go:74] duration metric: took 180.464708ms to wait for pod list to return data ...
	I0803 23:10:06.969933   28167 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:10:07.158393   28167 request.go:629] Waited for 188.390565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:10:07.158466   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:10:07.158476   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:07.158487   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:07.158496   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:07.163254   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:07.163599   28167 default_sa.go:45] found service account: "default"
	I0803 23:10:07.163624   28167 default_sa.go:55] duration metric: took 193.683724ms for default service account to be created ...
	I0803 23:10:07.163635   28167 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:10:07.358108   28167 request.go:629] Waited for 194.40227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:07.358184   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:10:07.358190   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:07.358197   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:07.358201   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:07.364112   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:10:07.369572   28167 system_pods.go:86] 17 kube-system pods found
	I0803 23:10:07.369606   28167 system_pods.go:89] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:10:07.369618   28167 system_pods.go:89] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:10:07.369625   28167 system_pods.go:89] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:10:07.369630   28167 system_pods.go:89] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:10:07.369636   28167 system_pods.go:89] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:10:07.369641   28167 system_pods.go:89] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:10:07.369648   28167 system_pods.go:89] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:10:07.369654   28167 system_pods.go:89] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:10:07.369661   28167 system_pods.go:89] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:10:07.369668   28167 system_pods.go:89] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:10:07.369679   28167 system_pods.go:89] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:10:07.369689   28167 system_pods.go:89] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:10:07.369699   28167 system_pods.go:89] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:10:07.369707   28167 system_pods.go:89] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:10:07.369716   28167 system_pods.go:89] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:10:07.369722   28167 system_pods.go:89] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:10:07.369731   28167 system_pods.go:89] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:10:07.369740   28167 system_pods.go:126] duration metric: took 206.098508ms to wait for k8s-apps to be running ...
	I0803 23:10:07.369761   28167 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:10:07.369818   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:10:07.392122   28167 system_svc.go:56] duration metric: took 22.355063ms WaitForService to wait for kubelet
	I0803 23:10:07.392147   28167 kubeadm.go:582] duration metric: took 22.713413593s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:10:07.392173   28167 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:10:07.557747   28167 request.go:629] Waited for 165.500392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes
	I0803 23:10:07.557798   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes
	I0803 23:10:07.557805   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:07.557816   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:07.557825   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:07.561883   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:10:07.562634   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:10:07.562659   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:10:07.562679   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:10:07.562682   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:10:07.562686   28167 node_conditions.go:105] duration metric: took 170.50921ms to run NodePressure ...
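	[note] The NodePressure step above lists every node and records its ephemeral-storage and CPU capacity. A compact sketch of reading those fields with client-go (illustrative; it reuses the same kubeconfig loading shown in the earlier sketch):

```go
// nodecapacity_sketch.go - illustrative; prints per-node CPU and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// Matches the log: "node storage ephemeral capacity is ...Ki" / "node cpu capacity is 2".
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```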
	I0803 23:10:07.562701   28167 start.go:241] waiting for startup goroutines ...
	I0803 23:10:07.562730   28167 start.go:255] writing updated cluster config ...
	I0803 23:10:07.564884   28167 out.go:177] 
	I0803 23:10:07.566769   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:07.566929   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:10:07.569128   28167 out.go:177] * Starting "ha-076508-m03" control-plane node in "ha-076508" cluster
	I0803 23:10:07.570611   28167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:10:07.570649   28167 cache.go:56] Caching tarball of preloaded images
	I0803 23:10:07.570811   28167 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:10:07.570829   28167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:10:07.570961   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:10:07.571239   28167 start.go:360] acquireMachinesLock for ha-076508-m03: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:10:07.571306   28167 start.go:364] duration metric: took 38.243µs to acquireMachinesLock for "ha-076508-m03"
	I0803 23:10:07.571343   28167 start.go:93] Provisioning new machine with config: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:10:07.571460   28167 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0803 23:10:07.573238   28167 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:10:07.573404   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:07.573449   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:07.588630   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I0803 23:10:07.589135   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:07.589608   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:07.589630   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:07.590095   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:07.590298   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:07.590494   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:07.590706   28167 start.go:159] libmachine.API.Create for "ha-076508" (driver="kvm2")
	I0803 23:10:07.590740   28167 client.go:168] LocalClient.Create starting
	I0803 23:10:07.590785   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 23:10:07.590819   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:10:07.590833   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:10:07.590884   28167 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 23:10:07.590906   28167 main.go:141] libmachine: Decoding PEM data...
	I0803 23:10:07.590917   28167 main.go:141] libmachine: Parsing certificate...
	I0803 23:10:07.590932   28167 main.go:141] libmachine: Running pre-create checks...
	I0803 23:10:07.590940   28167 main.go:141] libmachine: (ha-076508-m03) Calling .PreCreateCheck
	I0803 23:10:07.591116   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetConfigRaw
	I0803 23:10:07.591656   28167 main.go:141] libmachine: Creating machine...
	I0803 23:10:07.591676   28167 main.go:141] libmachine: (ha-076508-m03) Calling .Create
	I0803 23:10:07.591831   28167 main.go:141] libmachine: (ha-076508-m03) Creating KVM machine...
	I0803 23:10:07.593193   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found existing default KVM network
	I0803 23:10:07.593326   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found existing private KVM network mk-ha-076508
	I0803 23:10:07.593471   28167 main.go:141] libmachine: (ha-076508-m03) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03 ...
	I0803 23:10:07.593532   28167 main.go:141] libmachine: (ha-076508-m03) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:10:07.593618   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.593489   29267 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:10:07.593710   28167 main.go:141] libmachine: (ha-076508-m03) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:10:07.827516   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.827348   29267 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa...
	I0803 23:10:07.977100   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.976988   29267 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/ha-076508-m03.rawdisk...
	I0803 23:10:07.977127   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Writing magic tar header
	I0803 23:10:07.977140   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Writing SSH key tar header
	I0803 23:10:07.977152   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:07.977109   29267 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03 ...
	I0803 23:10:07.977230   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03
	I0803 23:10:07.977253   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 23:10:07.977267   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03 (perms=drwx------)
	I0803 23:10:07.977281   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:10:07.977292   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 23:10:07.977300   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:10:07.977308   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:10:07.977315   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Checking permissions on dir: /home
	I0803 23:10:07.977325   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Skipping /home - not owner
	I0803 23:10:07.977376   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:10:07.977394   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 23:10:07.977407   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 23:10:07.977421   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:10:07.977436   28167 main.go:141] libmachine: (ha-076508-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:10:07.977455   28167 main.go:141] libmachine: (ha-076508-m03) Creating domain...
	I0803 23:10:07.978394   28167 main.go:141] libmachine: (ha-076508-m03) define libvirt domain using xml: 
	I0803 23:10:07.978411   28167 main.go:141] libmachine: (ha-076508-m03) <domain type='kvm'>
	I0803 23:10:07.978419   28167 main.go:141] libmachine: (ha-076508-m03)   <name>ha-076508-m03</name>
	I0803 23:10:07.978432   28167 main.go:141] libmachine: (ha-076508-m03)   <memory unit='MiB'>2200</memory>
	I0803 23:10:07.978443   28167 main.go:141] libmachine: (ha-076508-m03)   <vcpu>2</vcpu>
	I0803 23:10:07.978455   28167 main.go:141] libmachine: (ha-076508-m03)   <features>
	I0803 23:10:07.978464   28167 main.go:141] libmachine: (ha-076508-m03)     <acpi/>
	I0803 23:10:07.978475   28167 main.go:141] libmachine: (ha-076508-m03)     <apic/>
	I0803 23:10:07.978486   28167 main.go:141] libmachine: (ha-076508-m03)     <pae/>
	I0803 23:10:07.978495   28167 main.go:141] libmachine: (ha-076508-m03)     
	I0803 23:10:07.978504   28167 main.go:141] libmachine: (ha-076508-m03)   </features>
	I0803 23:10:07.978514   28167 main.go:141] libmachine: (ha-076508-m03)   <cpu mode='host-passthrough'>
	I0803 23:10:07.978538   28167 main.go:141] libmachine: (ha-076508-m03)   
	I0803 23:10:07.978557   28167 main.go:141] libmachine: (ha-076508-m03)   </cpu>
	I0803 23:10:07.978563   28167 main.go:141] libmachine: (ha-076508-m03)   <os>
	I0803 23:10:07.978569   28167 main.go:141] libmachine: (ha-076508-m03)     <type>hvm</type>
	I0803 23:10:07.978575   28167 main.go:141] libmachine: (ha-076508-m03)     <boot dev='cdrom'/>
	I0803 23:10:07.978586   28167 main.go:141] libmachine: (ha-076508-m03)     <boot dev='hd'/>
	I0803 23:10:07.978592   28167 main.go:141] libmachine: (ha-076508-m03)     <bootmenu enable='no'/>
	I0803 23:10:07.978598   28167 main.go:141] libmachine: (ha-076508-m03)   </os>
	I0803 23:10:07.978604   28167 main.go:141] libmachine: (ha-076508-m03)   <devices>
	I0803 23:10:07.978614   28167 main.go:141] libmachine: (ha-076508-m03)     <disk type='file' device='cdrom'>
	I0803 23:10:07.978626   28167 main.go:141] libmachine: (ha-076508-m03)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/boot2docker.iso'/>
	I0803 23:10:07.978632   28167 main.go:141] libmachine: (ha-076508-m03)       <target dev='hdc' bus='scsi'/>
	I0803 23:10:07.978638   28167 main.go:141] libmachine: (ha-076508-m03)       <readonly/>
	I0803 23:10:07.978644   28167 main.go:141] libmachine: (ha-076508-m03)     </disk>
	I0803 23:10:07.978650   28167 main.go:141] libmachine: (ha-076508-m03)     <disk type='file' device='disk'>
	I0803 23:10:07.978665   28167 main.go:141] libmachine: (ha-076508-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:10:07.978675   28167 main.go:141] libmachine: (ha-076508-m03)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/ha-076508-m03.rawdisk'/>
	I0803 23:10:07.978683   28167 main.go:141] libmachine: (ha-076508-m03)       <target dev='hda' bus='virtio'/>
	I0803 23:10:07.978689   28167 main.go:141] libmachine: (ha-076508-m03)     </disk>
	I0803 23:10:07.978698   28167 main.go:141] libmachine: (ha-076508-m03)     <interface type='network'>
	I0803 23:10:07.978710   28167 main.go:141] libmachine: (ha-076508-m03)       <source network='mk-ha-076508'/>
	I0803 23:10:07.978719   28167 main.go:141] libmachine: (ha-076508-m03)       <model type='virtio'/>
	I0803 23:10:07.978743   28167 main.go:141] libmachine: (ha-076508-m03)     </interface>
	I0803 23:10:07.978760   28167 main.go:141] libmachine: (ha-076508-m03)     <interface type='network'>
	I0803 23:10:07.978769   28167 main.go:141] libmachine: (ha-076508-m03)       <source network='default'/>
	I0803 23:10:07.978777   28167 main.go:141] libmachine: (ha-076508-m03)       <model type='virtio'/>
	I0803 23:10:07.978792   28167 main.go:141] libmachine: (ha-076508-m03)     </interface>
	I0803 23:10:07.978808   28167 main.go:141] libmachine: (ha-076508-m03)     <serial type='pty'>
	I0803 23:10:07.978822   28167 main.go:141] libmachine: (ha-076508-m03)       <target port='0'/>
	I0803 23:10:07.978832   28167 main.go:141] libmachine: (ha-076508-m03)     </serial>
	I0803 23:10:07.978843   28167 main.go:141] libmachine: (ha-076508-m03)     <console type='pty'>
	I0803 23:10:07.978849   28167 main.go:141] libmachine: (ha-076508-m03)       <target type='serial' port='0'/>
	I0803 23:10:07.978855   28167 main.go:141] libmachine: (ha-076508-m03)     </console>
	I0803 23:10:07.978868   28167 main.go:141] libmachine: (ha-076508-m03)     <rng model='virtio'>
	I0803 23:10:07.978883   28167 main.go:141] libmachine: (ha-076508-m03)       <backend model='random'>/dev/random</backend>
	I0803 23:10:07.978892   28167 main.go:141] libmachine: (ha-076508-m03)     </rng>
	I0803 23:10:07.978915   28167 main.go:141] libmachine: (ha-076508-m03)     
	I0803 23:10:07.978933   28167 main.go:141] libmachine: (ha-076508-m03)     
	I0803 23:10:07.978943   28167 main.go:141] libmachine: (ha-076508-m03)   </devices>
	I0803 23:10:07.978951   28167 main.go:141] libmachine: (ha-076508-m03) </domain>
	I0803 23:10:07.978966   28167 main.go:141] libmachine: (ha-076508-m03) 
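	[note] The XML the driver logs above is an ordinary libvirt domain definition: a 2-vCPU, 2200 MiB guest that boots the boot2docker ISO, attaches a raw disk, and gets two virtio NICs (the cluster network mk-ha-076508 plus default). A rough sketch of defining and starting such a domain with the libvirt Go binding, assuming libvirt.org/go/libvirt is available and a local file holds XML like the above (this is not the driver's actual code):

```go
// definedomain_sketch.go - illustrative use of the libvirt Go binding.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Assumed file containing a domain definition like the one printed in the log.
	xmlBytes, err := os.ReadFile("ha-076508-m03.xml")
	if err != nil {
		panic(err)
	}

	// Same URI as KVMQemuURI in the machine config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(string(xmlBytes))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
```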
	I0803 23:10:07.987006   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:e0:40:7d in network default
	I0803 23:10:07.987567   28167 main.go:141] libmachine: (ha-076508-m03) Ensuring networks are active...
	I0803 23:10:07.987583   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:07.988584   28167 main.go:141] libmachine: (ha-076508-m03) Ensuring network default is active
	I0803 23:10:07.988860   28167 main.go:141] libmachine: (ha-076508-m03) Ensuring network mk-ha-076508 is active
	I0803 23:10:07.989256   28167 main.go:141] libmachine: (ha-076508-m03) Getting domain xml...
	I0803 23:10:07.990103   28167 main.go:141] libmachine: (ha-076508-m03) Creating domain...
	I0803 23:10:09.248349   28167 main.go:141] libmachine: (ha-076508-m03) Waiting to get IP...
	I0803 23:10:09.249200   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:09.249636   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:09.249689   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:09.249618   29267 retry.go:31] will retry after 285.933143ms: waiting for machine to come up
	I0803 23:10:09.537243   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:09.537744   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:09.537770   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:09.537704   29267 retry.go:31] will retry after 249.301407ms: waiting for machine to come up
	I0803 23:10:09.788109   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:09.788657   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:09.788686   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:09.788601   29267 retry.go:31] will retry after 335.559043ms: waiting for machine to come up
	I0803 23:10:10.126156   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:10.126620   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:10.126650   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:10.126554   29267 retry.go:31] will retry after 474.638702ms: waiting for machine to come up
	I0803 23:10:10.602678   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:10.603108   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:10.603133   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:10.603061   29267 retry.go:31] will retry after 685.693379ms: waiting for machine to come up
	I0803 23:10:11.289879   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:11.290287   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:11.290313   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:11.290238   29267 retry.go:31] will retry after 607.834329ms: waiting for machine to come up
	I0803 23:10:11.899542   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:11.899975   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:11.900003   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:11.899920   29267 retry.go:31] will retry after 1.161412916s: waiting for machine to come up
	I0803 23:10:13.063410   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:13.063935   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:13.063964   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:13.063899   29267 retry.go:31] will retry after 1.250338083s: waiting for machine to come up
	I0803 23:10:14.315473   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:14.315910   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:14.315938   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:14.315861   29267 retry.go:31] will retry after 1.544589706s: waiting for machine to come up
	I0803 23:10:15.862400   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:15.862856   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:15.862873   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:15.862829   29267 retry.go:31] will retry after 1.643124459s: waiting for machine to come up
	I0803 23:10:17.507142   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:17.507682   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:17.507708   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:17.507633   29267 retry.go:31] will retry after 2.036118191s: waiting for machine to come up
	I0803 23:10:19.546457   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:19.547016   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:19.547064   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:19.546973   29267 retry.go:31] will retry after 2.436825652s: waiting for machine to come up
	I0803 23:10:21.986604   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:21.987159   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:21.987185   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:21.987096   29267 retry.go:31] will retry after 3.233370764s: waiting for machine to come up
	I0803 23:10:25.223298   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:25.223812   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find current IP address of domain ha-076508-m03 in network mk-ha-076508
	I0803 23:10:25.223835   28167 main.go:141] libmachine: (ha-076508-m03) DBG | I0803 23:10:25.223775   29267 retry.go:31] will retry after 4.665419653s: waiting for machine to come up
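	[note] The retry lines above are a plain wait loop: poll for the domain's DHCP lease and, on each miss, sleep for a delay that grows (with some jitter) until the machine answers. A minimal stand-alone sketch of that pattern; the delays, attempt cap, and helper names are illustrative, not the driver's actual tuning:

```go
// retrywait_sketch.go - illustrative grow-with-jitter retry loop.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real check (querying libvirt for the domain's DHCP lease).
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address") // assumed to succeed eventually
}

func waitForIP(maxAttempts int) (string, error) {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, mirroring the increasing waits in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: will retry after %s\n", attempt, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("no IP after %d attempts", maxAttempts)
}

func main() {
	if ip, err := waitForIP(5); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```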
	I0803 23:10:29.890441   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.890851   28167 main.go:141] libmachine: (ha-076508-m03) Found IP for machine: 192.168.39.86
	I0803 23:10:29.890873   28167 main.go:141] libmachine: (ha-076508-m03) Reserving static IP address...
	I0803 23:10:29.890889   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has current primary IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.891328   28167 main.go:141] libmachine: (ha-076508-m03) DBG | unable to find host DHCP lease matching {name: "ha-076508-m03", mac: "52:54:00:f0:20:c2", ip: "192.168.39.86"} in network mk-ha-076508
	I0803 23:10:29.968716   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Getting to WaitForSSH function...
	I0803 23:10:29.968773   28167 main.go:141] libmachine: (ha-076508-m03) Reserved static IP address: 192.168.39.86
	I0803 23:10:29.968789   28167 main.go:141] libmachine: (ha-076508-m03) Waiting for SSH to be available...
	I0803 23:10:29.971322   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.971833   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:29.971860   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:29.972036   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Using SSH client type: external
	I0803 23:10:29.972061   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa (-rw-------)
	I0803 23:10:29.972099   28167 main.go:141] libmachine: (ha-076508-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:10:29.972119   28167 main.go:141] libmachine: (ha-076508-m03) DBG | About to run SSH command:
	I0803 23:10:29.972130   28167 main.go:141] libmachine: (ha-076508-m03) DBG | exit 0
	I0803 23:10:30.097458   28167 main.go:141] libmachine: (ha-076508-m03) DBG | SSH cmd err, output: <nil>: 
	I0803 23:10:30.097656   28167 main.go:141] libmachine: (ha-076508-m03) KVM machine creation complete!
	I0803 23:10:30.098051   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetConfigRaw
	I0803 23:10:30.098550   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:30.098752   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:30.098895   28167 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:10:30.098911   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:10:30.100080   28167 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:10:30.100103   28167 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:10:30.100111   28167 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:10:30.100117   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.102661   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.103076   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.103106   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.103226   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.103431   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.103588   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.103724   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.103874   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.104109   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.104123   28167 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:10:30.208714   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:10:30.208741   28167 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:10:30.208752   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.211697   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.212050   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.212080   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.212250   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.212429   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.212596   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.212772   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.212933   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.213132   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.213150   28167 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:10:30.314316   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:10:30.314413   28167 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:10:30.314425   28167 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:10:30.314441   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:30.314709   28167 buildroot.go:166] provisioning hostname "ha-076508-m03"
	I0803 23:10:30.314739   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:30.314975   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.317995   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.318447   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.318470   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.318551   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.318747   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.318921   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.319069   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.319229   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.319432   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.319448   28167 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508-m03 && echo "ha-076508-m03" | sudo tee /etc/hostname
	I0803 23:10:30.435894   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508-m03
	
	I0803 23:10:30.435924   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.438653   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.438955   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.438980   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.439130   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.439306   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.439468   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.439639   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.439815   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.440025   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.440043   28167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:10:30.551603   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
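The two SSH commands above make up the whole hostname step: set the guest's transient and persistent hostname, then make sure the name resolves locally. A condensed sketch of the same sequence (node name taken from this run, purely illustrative):
    NODE=ha-076508-m03
    # set the transient and persistent hostname
    sudo hostname "$NODE" && echo "$NODE" | sudo tee /etc/hostname
    # map the name to 127.0.1.1 unless /etc/hosts already knows it
    grep -q "[[:space:]]$NODE\$" /etc/hosts || \
      echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts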
	I0803 23:10:30.551635   28167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:10:30.551651   28167 buildroot.go:174] setting up certificates
	I0803 23:10:30.551660   28167 provision.go:84] configureAuth start
	I0803 23:10:30.551668   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetMachineName
	I0803 23:10:30.551973   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:30.554323   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.554598   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.554650   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.554762   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.556944   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.557367   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.557395   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.557474   28167 provision.go:143] copyHostCerts
	I0803 23:10:30.557505   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:10:30.557541   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:10:30.557550   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:10:30.557610   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:10:30.557690   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:10:30.557709   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:10:30.557713   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:10:30.557741   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:10:30.557819   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:10:30.557839   28167 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:10:30.557843   28167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:10:30.557866   28167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:10:30.557913   28167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508-m03 san=[127.0.0.1 192.168.39.86 ha-076508-m03 localhost minikube]
	I0803 23:10:30.655066   28167 provision.go:177] copyRemoteCerts
	I0803 23:10:30.655117   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:10:30.655138   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.657642   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.657986   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.658015   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.658268   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.658485   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.658623   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.658764   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:30.740302   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:10:30.740367   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:10:30.766773   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:10:30.766854   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:10:30.794641   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:10:30.794705   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:10:30.821626   28167 provision.go:87] duration metric: took 269.952761ms to configureAuth
	I0803 23:10:30.821653   28167 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:10:30.821926   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:30.822025   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:30.825020   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.825452   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:30.825483   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:30.825722   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:30.825965   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.826144   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:30.826280   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:30.826430   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:30.826598   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:30.826612   28167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:10:31.111311   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:10:31.111343   28167 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:10:31.111355   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetURL
	I0803 23:10:31.112737   28167 main.go:141] libmachine: (ha-076508-m03) DBG | Using libvirt version 6000000
	I0803 23:10:31.115452   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.115868   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.115897   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.116133   28167 main.go:141] libmachine: Docker is up and running!
	I0803 23:10:31.116146   28167 main.go:141] libmachine: Reticulating splines...
	I0803 23:10:31.116152   28167 client.go:171] duration metric: took 23.525402572s to LocalClient.Create
	I0803 23:10:31.116173   28167 start.go:167] duration metric: took 23.52546941s to libmachine.API.Create "ha-076508"
	I0803 23:10:31.116188   28167 start.go:293] postStartSetup for "ha-076508-m03" (driver="kvm2")
	I0803 23:10:31.116200   28167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:10:31.116216   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.116431   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:10:31.116452   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:31.118369   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.118630   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.118657   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.118808   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.118971   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.119164   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.119312   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:31.200460   28167 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:10:31.205806   28167 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:10:31.205840   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:10:31.205987   28167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:10:31.206177   28167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:10:31.206194   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:10:31.206305   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:10:31.218211   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:10:31.244937   28167 start.go:296] duration metric: took 128.728685ms for postStartSetup
	I0803 23:10:31.245009   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetConfigRaw
	I0803 23:10:31.245627   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:31.248661   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.249046   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.249067   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.249472   28167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:10:31.249715   28167 start.go:128] duration metric: took 23.678244602s to createHost
	I0803 23:10:31.249756   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:31.252488   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.252922   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.252953   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.253184   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.253406   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.253616   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.253794   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.253975   28167 main.go:141] libmachine: Using SSH client type: native
	I0803 23:10:31.254174   28167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.86 22 <nil> <nil>}
	I0803 23:10:31.254190   28167 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:10:31.354522   28167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722726631.327455448
	
	I0803 23:10:31.354546   28167 fix.go:216] guest clock: 1722726631.327455448
	I0803 23:10:31.354556   28167 fix.go:229] Guest: 2024-08-03 23:10:31.327455448 +0000 UTC Remote: 2024-08-03 23:10:31.249737563 +0000 UTC m=+223.792678543 (delta=77.717885ms)
	I0803 23:10:31.354580   28167 fix.go:200] guest clock delta is within tolerance: 77.717885ms
	I0803 23:10:31.354587   28167 start.go:83] releasing machines lock for "ha-076508-m03", held for 23.783271299s
	I0803 23:10:31.354611   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.354933   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:31.358012   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.358446   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.358474   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.360942   28167 out.go:177] * Found network options:
	I0803 23:10:31.362445   28167 out.go:177]   - NO_PROXY=192.168.39.154,192.168.39.245
	W0803 23:10:31.363709   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:10:31.363733   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:10:31.363747   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.364321   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.364504   28167 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:10:31.364600   28167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:10:31.364632   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	W0803 23:10:31.364726   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:10:31.364750   28167 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:10:31.364853   28167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:10:31.364875   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:10:31.367662   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.367686   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.368094   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.368129   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:31.368158   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.368185   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:31.368295   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.368415   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:10:31.368462   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.368541   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:10:31.368611   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.368672   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:10:31.368724   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:31.368783   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:10:31.614534   28167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:10:31.621221   28167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:10:31.621279   28167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:10:31.640586   28167 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
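Before configuring CRI-O, any pre-existing bridge/podman CNI definitions are parked with a .mk_disabled suffix so only the CNI minikube installs later stays active; here that caught the stock 87-podman-bridge.conflist. A quick look at what is left enabled (illustrative):
    ls /etc/cni/net.d/
    # parked configs keep their content, only the suffix changes
    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null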
	I0803 23:10:31.640614   28167 start.go:495] detecting cgroup driver to use...
	I0803 23:10:31.640697   28167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:10:31.661027   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:10:31.677884   28167 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:10:31.677966   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:10:31.694226   28167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:10:31.708499   28167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:10:31.825583   28167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:10:31.997400   28167 docker.go:233] disabling docker service ...
	I0803 23:10:31.997472   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:10:32.012727   28167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:10:32.026457   28167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:10:32.154114   28167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:10:32.278557   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:10:32.295162   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:10:32.315162   28167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:10:32.315252   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.326283   28167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:10:32.326343   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.338237   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.349853   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.361904   28167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:10:32.373916   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.385926   28167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:10:32.406945   28167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
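Net effect of the sed series above on /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged ports through default_sysctls. A rough spot-check of the drop-in afterwards (a sketch, not output captured from this run):
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' "$CONF"
    grep -A2 'default_sysctls' "$CONF"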
	I0803 23:10:32.421449   28167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:10:32.431888   28167 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:10:32.431957   28167 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:10:32.448988   28167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
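The sysctl probe fails only because br_netfilter is not loaded yet, so the fallback path loads the module and enables IPv4 forwarding directly; the same recovery by hand would be:
    sudo modprobe br_netfilter
    # the key exists once the module is loaded
    sysctl net.bridge.bridge-nf-call-iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward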
	I0803 23:10:32.459616   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:10:32.588640   28167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:10:32.726397   28167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:10:32.726470   28167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:10:32.731304   28167 start.go:563] Will wait 60s for crictl version
	I0803 23:10:32.731349   28167 ssh_runner.go:195] Run: which crictl
	I0803 23:10:32.735182   28167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:10:32.774180   28167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:10:32.774271   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:10:32.804446   28167 ssh_runner.go:195] Run: crio --version
	I0803 23:10:32.836356   28167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:10:32.837737   28167 out.go:177]   - env NO_PROXY=192.168.39.154
	I0803 23:10:32.838985   28167 out.go:177]   - env NO_PROXY=192.168.39.154,192.168.39.245
	I0803 23:10:32.840314   28167 main.go:141] libmachine: (ha-076508-m03) Calling .GetIP
	I0803 23:10:32.843214   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:32.843728   28167 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:10:32.843754   28167 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:10:32.843977   28167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:10:32.848385   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:10:32.861630   28167 mustload.go:65] Loading cluster: ha-076508
	I0803 23:10:32.861891   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:32.862154   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:32.862192   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:32.877838   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0803 23:10:32.878216   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:32.878763   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:32.878783   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:32.879142   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:32.879328   28167 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:10:32.880742   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:10:32.881034   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:32.881066   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:32.896078   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45901
	I0803 23:10:32.896488   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:32.896941   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:32.896964   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:32.897260   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:32.897452   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:10:32.897618   28167 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.86
	I0803 23:10:32.897629   28167 certs.go:194] generating shared ca certs ...
	I0803 23:10:32.897645   28167 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:10:32.897787   28167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:10:32.897840   28167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:10:32.897857   28167 certs.go:256] generating profile certs ...
	I0803 23:10:32.897967   28167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:10:32.897998   28167 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537
	I0803 23:10:32.898022   28167 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.245 192.168.39.86 192.168.39.254]
	I0803 23:10:33.154134   28167 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537 ...
	I0803 23:10:33.154168   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537: {Name:mk682d6ecfb96dbed7a4b277a1a22d21b911660e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:10:33.154392   28167 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537 ...
	I0803 23:10:33.154411   28167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537: {Name:mk9319630d2a2fe9289f3b8bdf9a93cb217ef0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:10:33.154554   28167 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.f5620537 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:10:33.154758   28167 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.f5620537 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
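Joining a third control-plane node forces the apiserver certificate to be re-issued so its SANs cover every control-plane IP plus the 192.168.39.254 VIP, which is what lets kubeconfigs keep pointing at one endpoint while any of the three servers answers. A standalone openssl sketch with the same SAN set (illustrative only; minikube signs this in-process, and ca.crt/ca.key are assumed to be at hand):
    # bash only (process substitution); addresses are the ones from this run
    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt -extfile <(printf '%s' \
      "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.154,IP:192.168.39.245,IP:192.168.39.86,IP:192.168.39.254")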
	I0803 23:10:33.154959   28167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:10:33.154984   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:10:33.155009   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:10:33.155032   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:10:33.155055   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:10:33.155074   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:10:33.155097   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:10:33.155121   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:10:33.155142   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:10:33.155214   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:10:33.155258   28167 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:10:33.155274   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:10:33.155308   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:10:33.155347   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:10:33.155387   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:10:33.155450   28167 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:10:33.155493   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.155516   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.155538   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.155585   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:10:33.158701   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:33.159165   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:10:33.159195   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:33.159391   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:10:33.159619   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:10:33.159818   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:10:33.159994   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:10:33.241782   28167 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0803 23:10:33.247573   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:10:33.261083   28167 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0803 23:10:33.265717   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0803 23:10:33.278224   28167 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:10:33.283382   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:10:33.295572   28167 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:10:33.300485   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:10:33.320637   28167 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:10:33.325493   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:10:33.339599   28167 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0803 23:10:33.345306   28167 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:10:33.358174   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:10:33.386515   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:10:33.413013   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:10:33.437564   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:10:33.462725   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0803 23:10:33.488324   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:10:33.516049   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:10:33.545077   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:10:33.571339   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:10:33.598911   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:10:33.624365   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:10:33.650645   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:10:33.668748   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0803 23:10:33.685967   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:10:33.702890   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:10:33.720867   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:10:33.738443   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:10:33.756919   28167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:10:33.774564   28167 ssh_runner.go:195] Run: openssl version
	I0803 23:10:33.781110   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:10:33.793528   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.798625   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.798691   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:10:33.804731   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:10:33.815861   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:10:33.828021   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.833448   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.833508   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:10:33.839587   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:10:33.850838   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:10:33.862244   28167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.867289   28167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.867355   28167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:10:33.873268   28167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:10:33.885300   28167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:10:33.890043   28167 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:10:33.890102   28167 kubeadm.go:934] updating node {m03 192.168.39.86 8443 v1.30.3 crio true true} ...
	I0803 23:10:33.890215   28167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
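Between the three control-plane nodes only --hostname-override and --node-ip differ in this ExecStart; the rest of the kubelet drop-in is identical. On the joined node the effective flags can be confirmed with (illustrative):
    systemctl cat kubelet | grep -E 'node-ip|hostname-override'
    ps -o args= -C kubelet | tr ' ' '\n' | grep -E '^--(node-ip|hostname-override)'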
	I0803 23:10:33.890247   28167 kube-vip.go:115] generating kube-vip config ...
	I0803 23:10:33.890296   28167 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:10:33.908038   28167 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:10:33.908123   28167 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
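This manifest lands in /etc/kubernetes/manifests, so kubelet runs kube-vip as a static pod on every control-plane node; the instances elect a leader via the plndr-cp-lock lease and the leader answers ARP for 192.168.39.254 on eth0, fronting the apiservers on port 8443. Two quick ways to see who currently holds the VIP (illustrative):
    # on a control-plane node: is the VIP bound here right now?
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # from anywhere with a kubeconfig: which node owns the leader lease?
    kubectl -n kube-system get lease plndr-cp-lock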
	I0803 23:10:33.908185   28167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:10:33.918317   28167 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:10:33.918388   28167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:10:33.928855   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:10:33.928882   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0803 23:10:33.928899   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:10:33.928908   28167 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0803 23:10:33.928925   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:10:33.928929   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:10:33.928974   28167 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:10:33.928988   28167 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:10:33.935130   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:10:33.935162   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:10:33.967603   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:10:33.967652   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:10:33.967667   28167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:10:33.967745   28167 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:10:34.015821   28167 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:10:34.015864   28167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
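
	(For reference: the ?checksum=file: suffix on the download URLs above points the downloader at the .sha256 file Kubernetes publishes next to each binary, so the bits can be verified before they are pushed to the node. A manual equivalent of that verification, shown only as a sketch and not part of this run, assuming curl and sha256sum on the host:

	    # fetch the binary and its published SHA-256, then check them together
	    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
	    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
	)
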
	I0803 23:10:34.859585   28167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:10:34.870219   28167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0803 23:10:34.889025   28167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:10:34.908786   28167 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:10:34.929133   28167 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:10:34.933650   28167 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:10:34.947347   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:10:35.076750   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
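
	(For reference: the hosts-file step a few lines up is an idempotent replace rather than an append; any existing control-plane.minikube.internal line is filtered out and a single entry pointing at the HA virtual IP 192.168.39.254 is written back. Broken out with comments, the same logged command reads:

	    # keep every /etc/hosts line except an old control-plane.minikube.internal entry,
	    # append the current VIP mapping, then copy the temp file back into place
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo "192.168.39.254	control-plane.minikube.internal"
	    } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts
	)
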
	I0803 23:10:35.095743   28167 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:10:35.096190   28167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:10:35.096239   28167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:10:35.112256   28167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0803 23:10:35.112710   28167 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:10:35.113188   28167 main.go:141] libmachine: Using API Version  1
	I0803 23:10:35.113215   28167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:10:35.113579   28167 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:10:35.113806   28167 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:10:35.114009   28167 start.go:317] joinCluster: &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:10:35.114170   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:10:35.114187   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:10:35.117526   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:35.118039   28167 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:10:35.118085   28167 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:10:35.118254   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:10:35.118445   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:10:35.118633   28167 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:10:35.118784   28167 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:10:35.296853   28167 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:10:35.296901   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9kwipv.ic5tyi0dwv1kfzk7 --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m03 --control-plane --apiserver-advertise-address=192.168.39.86 --apiserver-bind-port=8443"
	I0803 23:10:57.974189   28167 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9kwipv.ic5tyi0dwv1kfzk7 --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m03 --control-plane --apiserver-advertise-address=192.168.39.86 --apiserver-bind-port=8443": (22.677248756s)
	I0803 23:10:57.974236   28167 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:10:58.624270   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-076508-m03 minikube.k8s.io/updated_at=2024_08_03T23_10_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=ha-076508 minikube.k8s.io/primary=false
	I0803 23:10:58.762992   28167 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-076508-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0803 23:10:58.902499   28167 start.go:319] duration metric: took 23.788486948s to joinCluster
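
	(For reference: condensed, the 23.8s join recorded above is three steps: mint a reusable join command on an existing control-plane node, run kubeadm join on the new machine with the control-plane flags, then label the node and drop its NoSchedule taint so it can also run workloads. A sketch of that sequence using the values from this run; the token, CA hash, and addresses are specific to this cluster, and the label set is abbreviated:

	    # on an existing control-plane node: print a join command that does not expire
	    kubeadm token create --print-join-command --ttl=0

	    # on the new machine: join as an additional control plane through the VIP
	    kubeadm join control-plane.minikube.internal:8443 \
	      --token 9kwipv.ic5tyi0dwv1kfzk7 \
	      --discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	      --control-plane --apiserver-advertise-address=192.168.39.86 --apiserver-bind-port=8443 \
	      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-076508-m03

	    # record minikube metadata on the node and allow regular pods to schedule there
	    kubectl label --overwrite nodes ha-076508-m03 minikube.k8s.io/name=ha-076508 minikube.k8s.io/primary=false
	    kubectl taint nodes ha-076508-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	)
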
	I0803 23:10:58.902601   28167 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:10:58.902952   28167 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:10:58.904243   28167 out.go:177] * Verifying Kubernetes components...
	I0803 23:10:58.905653   28167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:10:59.196121   28167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:10:59.238713   28167 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:10:59.238967   28167 kapi.go:59] client config for ha-076508: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:10:59.239048   28167 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.154:8443
	I0803 23:10:59.239366   28167 node_ready.go:35] waiting up to 6m0s for node "ha-076508-m03" to be "Ready" ...
	I0803 23:10:59.239462   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:10:59.239473   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:59.239483   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:59.239490   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:59.243182   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:10:59.739904   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:10:59.739927   28167 round_trippers.go:469] Request Headers:
	I0803 23:10:59.739938   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:10:59.739944   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:10:59.743828   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:00.239826   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:00.239851   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:00.239861   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:00.239866   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:00.247137   28167 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:11:00.740151   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:00.740179   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:00.740188   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:00.740192   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:00.744188   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:01.240343   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:01.240365   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:01.240373   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:01.240377   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:01.244256   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:01.245076   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:01.740544   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:01.740565   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:01.740573   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:01.740578   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:01.744202   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:02.240319   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:02.240339   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:02.240347   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:02.240351   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:02.244191   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:02.739893   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:02.739920   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:02.739932   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:02.739937   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:02.743986   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:03.240246   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:03.240272   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:03.240286   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:03.240298   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:03.243994   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:03.739696   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:03.739717   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:03.739725   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:03.739730   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:03.743510   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:03.744170   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:04.239725   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:04.239744   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:04.239753   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:04.239757   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:04.244068   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:04.740393   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:04.740414   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:04.740422   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:04.740426   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:04.743938   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:05.239873   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:05.239901   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:05.239911   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:05.239916   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:05.243398   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:05.740403   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:05.740423   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:05.740431   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:05.740434   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:05.744314   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:05.745066   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:06.240357   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:06.240379   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:06.240387   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:06.240390   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:06.244366   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:06.740288   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:06.740311   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:06.740320   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:06.740323   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:06.743678   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:07.240322   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:07.240351   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:07.240361   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:07.240369   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:07.244184   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:07.740196   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:07.740219   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:07.740228   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:07.740231   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:07.743629   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:08.239630   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:08.239653   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:08.239663   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:08.239667   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:08.243194   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:08.243884   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:08.740350   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:08.740377   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:08.740387   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:08.740394   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:08.743980   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:09.239860   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:09.239881   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:09.239892   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:09.239897   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:09.243737   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:09.739830   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:09.739851   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:09.739858   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:09.739861   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:09.743539   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:10.240370   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:10.240391   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:10.240399   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:10.240402   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:10.243764   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:10.244359   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:10.740558   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:10.740579   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:10.740587   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:10.740591   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:10.744298   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:11.239575   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:11.239597   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:11.239606   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:11.239610   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:11.243363   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:11.739985   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:11.740007   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:11.740015   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:11.740020   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:11.743671   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:12.239986   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:12.240009   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:12.240017   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:12.240022   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:12.243616   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:12.740619   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:12.740642   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:12.740652   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:12.740660   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:12.744548   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:12.745257   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:13.240499   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:13.240520   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:13.240528   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:13.240532   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:13.244373   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:13.739833   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:13.739855   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:13.739865   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:13.739871   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:13.745530   28167 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:11:14.239857   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:14.239886   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:14.239895   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:14.239902   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:14.243621   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:14.739715   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:14.739735   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:14.739745   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:14.739750   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:14.743484   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:15.240627   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:15.240652   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:15.240664   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:15.240670   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:15.244239   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:15.244932   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:15.740084   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:15.740109   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:15.740117   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:15.740121   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:15.743654   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:16.239580   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:16.239599   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:16.239608   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:16.239612   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:16.243332   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:16.740428   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:16.740449   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:16.740457   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:16.740462   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:16.743902   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:17.240042   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:17.240065   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.240075   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.240081   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.244451   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:17.245084   28167 node_ready.go:53] node "ha-076508-m03" has status "Ready":"False"
	I0803 23:11:17.739546   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:17.739570   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.739582   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.739592   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.742972   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:17.743691   28167 node_ready.go:49] node "ha-076508-m03" has status "Ready":"True"
	I0803 23:11:17.743712   28167 node_ready.go:38] duration metric: took 18.504330252s for node "ha-076508-m03" to be "Ready" ...
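
	(For reference: the 18.5s wait above is a plain poll of the node object, roughly every half second, until its Ready condition turns True. From a workstation holding this cluster's kubeconfig the same check could be expressed directly with kubectl; a sketch, not part of the test run:

	    # block until the node reports Ready, using the same 6m budget as above
	    kubectl wait --for=condition=Ready node/ha-076508-m03 --timeout=6m

	    # or read the Ready condition by hand, as the GET requests above do
	    kubectl get node ha-076508-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	)
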
	I0803 23:11:17.743722   28167 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:11:17.743799   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:17.743812   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.743822   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.743833   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.750270   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:11:17.759237   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.759337   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g4nns
	I0803 23:11:17.759351   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.759362   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.759366   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.762579   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:17.763292   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:17.763308   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.763316   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.763320   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.766063   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.766587   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.766604   28167 pod_ready.go:81] duration metric: took 7.337575ms for pod "coredns-7db6d8ff4d-g4nns" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.766612   28167 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.766662   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jm52b
	I0803 23:11:17.766669   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.766676   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.766680   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.769067   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.769729   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:17.769742   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.769749   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.769754   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.772200   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.772747   28167 pod_ready.go:92] pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.772766   28167 pod_ready.go:81] duration metric: took 6.14586ms for pod "coredns-7db6d8ff4d-jm52b" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.772778   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.772853   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508
	I0803 23:11:17.772863   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.772870   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.772874   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.775414   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.775905   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:17.775947   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.775966   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.775975   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.778731   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.779137   28167 pod_ready.go:92] pod "etcd-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.779155   28167 pod_ready.go:81] duration metric: took 6.368718ms for pod "etcd-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.779167   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.779221   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m02
	I0803 23:11:17.779232   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.779243   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.779249   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.781797   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.782355   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:17.782379   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.782389   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.782396   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.784884   28167 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:11:17.785468   28167 pod_ready.go:92] pod "etcd-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:17.785487   28167 pod_ready.go:81] duration metric: took 6.312132ms for pod "etcd-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.785500   28167 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:17.940418   28167 request.go:629] Waited for 154.860404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m03
	I0803 23:11:17.940479   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508-m03
	I0803 23:11:17.940486   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:17.940496   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:17.940502   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:17.943696   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.139691   28167 request.go:629] Waited for 195.313563ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:18.139744   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:18.139749   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.139757   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.139761   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.144233   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:18.145050   28167 pod_ready.go:92] pod "etcd-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:18.145077   28167 pod_ready.go:81] duration metric: took 359.569179ms for pod "etcd-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.145099   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.340127   28167 request.go:629] Waited for 194.966235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:11:18.340186   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508
	I0803 23:11:18.340194   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.340201   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.340208   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.343635   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.539663   28167 request.go:629] Waited for 195.273755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:18.539711   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:18.539716   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.539723   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.539729   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.543681   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.544389   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:18.544416   28167 pod_ready.go:81] duration metric: took 399.307162ms for pod "kube-apiserver-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.544429   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.740437   28167 request.go:629] Waited for 195.929852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:11:18.740507   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m02
	I0803 23:11:18.740514   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.740525   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.740532   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.743910   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.940148   28167 request.go:629] Waited for 195.390341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:18.940208   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:18.940214   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:18.940224   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:18.940232   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:18.943491   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:18.943930   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:18.943946   28167 pod_ready.go:81] duration metric: took 399.509056ms for pod "kube-apiserver-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:18.943955   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.140099   28167 request.go:629] Waited for 196.077883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m03
	I0803 23:11:19.140168   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-076508-m03
	I0803 23:11:19.140174   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.140181   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.140187   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.144623   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:19.339857   28167 request.go:629] Waited for 194.358756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:19.339922   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:19.339930   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.339940   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.339946   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.343508   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:19.344247   28167 pod_ready.go:92] pod "kube-apiserver-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:19.344266   28167 pod_ready.go:81] duration metric: took 400.304551ms for pod "kube-apiserver-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.344276   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.540358   28167 request.go:629] Waited for 196.023302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:11:19.540431   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508
	I0803 23:11:19.540438   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.540448   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.540458   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.544200   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:19.740324   28167 request.go:629] Waited for 195.356736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:19.740377   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:19.740382   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.740390   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.740394   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.743827   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:19.744761   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:19.744778   28167 pod_ready.go:81] duration metric: took 400.494408ms for pod "kube-controller-manager-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.744792   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:19.940275   28167 request.go:629] Waited for 195.423466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:11:19.940362   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m02
	I0803 23:11:19.940373   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:19.940384   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:19.940391   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:19.944244   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.140475   28167 request.go:629] Waited for 195.324746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:20.140541   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:20.140547   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.140557   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.140564   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.144353   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.145194   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:20.145214   28167 pod_ready.go:81] duration metric: took 400.413105ms for pod "kube-controller-manager-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.145224   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.340356   28167 request.go:629] Waited for 195.046793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m03
	I0803 23:11:20.340418   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-076508-m03
	I0803 23:11:20.340425   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.340437   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.340449   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.343958   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.539967   28167 request.go:629] Waited for 195.367001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.540154   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.540175   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.540187   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.540192   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.543615   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.544156   28167 pod_ready.go:92] pod "kube-controller-manager-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:20.544177   28167 pod_ready.go:81] duration metric: took 398.945931ms for pod "kube-controller-manager-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.544190   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7kmfh" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.739937   28167 request.go:629] Waited for 195.685945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kmfh
	I0803 23:11:20.740015   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7kmfh
	I0803 23:11:20.740024   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.740033   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.740041   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.743950   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.940068   28167 request.go:629] Waited for 195.366819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.940173   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:20.940194   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:20.940203   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:20.940211   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:20.943592   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:20.944210   28167 pod_ready.go:92] pod "kube-proxy-7kmfh" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:20.944230   28167 pod_ready.go:81] duration metric: took 400.028865ms for pod "kube-proxy-7kmfh" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:20.944243   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.140332   28167 request.go:629] Waited for 196.016119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:11:21.140411   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hkfgl
	I0803 23:11:21.140424   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.140435   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.140441   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.144074   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:21.340110   28167 request.go:629] Waited for 195.174379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:21.340168   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:21.340173   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.340181   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.340188   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.343734   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:21.344378   28167 pod_ready.go:92] pod "kube-proxy-hkfgl" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:21.344396   28167 pod_ready.go:81] duration metric: took 400.141836ms for pod "kube-proxy-hkfgl" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.344406   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.540579   28167 request.go:629] Waited for 196.118535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:11:21.540671   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jvj96
	I0803 23:11:21.540682   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.540694   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.540702   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.544883   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:21.740073   28167 request.go:629] Waited for 194.356205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:21.740133   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:21.740140   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.740151   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.740161   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.743418   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:21.744057   28167 pod_ready.go:92] pod "kube-proxy-jvj96" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:21.744072   28167 pod_ready.go:81] duration metric: took 399.661209ms for pod "kube-proxy-jvj96" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.744081   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:21.940238   28167 request.go:629] Waited for 196.090504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:11:21.940298   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508
	I0803 23:11:21.940306   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:21.940315   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:21.940321   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:21.943662   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.139622   28167 request.go:629] Waited for 195.274565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:22.139721   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508
	I0803 23:11:22.139734   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.139745   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.139753   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.144185   28167 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:11:22.145072   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:22.145091   28167 pod_ready.go:81] duration metric: took 401.003535ms for pod "kube-scheduler-ha-076508" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.145100   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.339598   28167 request.go:629] Waited for 194.41909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:11:22.339661   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m02
	I0803 23:11:22.339667   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.339674   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.339679   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.343092   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.539739   28167 request.go:629] Waited for 196.145646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:22.539808   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m02
	I0803 23:11:22.539813   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.539820   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.539825   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.543323   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.543946   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:22.543965   28167 pod_ready.go:81] duration metric: took 398.855106ms for pod "kube-scheduler-ha-076508-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.543974   28167 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.739975   28167 request.go:629] Waited for 195.945481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m03
	I0803 23:11:22.740046   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-076508-m03
	I0803 23:11:22.740054   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.740064   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.740074   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.743288   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.940026   28167 request.go:629] Waited for 195.97419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:22.940101   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes/ha-076508-m03
	I0803 23:11:22.940107   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.940114   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.940121   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.943834   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:22.944320   28167 pod_ready.go:92] pod "kube-scheduler-ha-076508-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:11:22.944341   28167 pod_ready.go:81] duration metric: took 400.36089ms for pod "kube-scheduler-ha-076508-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:11:22.944354   28167 pod_ready.go:38] duration metric: took 5.200616734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:11:22.944371   28167 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:11:22.944420   28167 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:11:22.959767   28167 api_server.go:72] duration metric: took 24.05713082s to wait for apiserver process to appear ...
	I0803 23:11:22.959810   28167 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:11:22.959829   28167 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
	I0803 23:11:22.964677   28167 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
	ok
	I0803 23:11:22.964737   28167 round_trippers.go:463] GET https://192.168.39.154:8443/version
	I0803 23:11:22.964745   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:22.964752   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:22.964759   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:22.965925   28167 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0803 23:11:22.965986   28167 api_server.go:141] control plane version: v1.30.3
	I0803 23:11:22.965996   28167 api_server.go:131] duration metric: took 6.180078ms to wait for apiserver health ...
	I0803 23:11:22.966006   28167 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:11:23.140292   28167 request.go:629] Waited for 174.228703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.140372   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.140378   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.140385   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.140390   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.147378   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:11:23.154675   28167 system_pods.go:59] 24 kube-system pods found
	I0803 23:11:23.154710   28167 system_pods.go:61] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:11:23.154717   28167 system_pods.go:61] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:11:23.154723   28167 system_pods.go:61] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:11:23.154727   28167 system_pods.go:61] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:11:23.154732   28167 system_pods.go:61] "etcd-ha-076508-m03" [e13f1f48-0494-4c42-852b-34bb56b06d64] Running
	I0803 23:11:23.154737   28167 system_pods.go:61] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:11:23.154742   28167 system_pods.go:61] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:11:23.154746   28167 system_pods.go:61] "kindnet-tzzq4" [42e5000f-b60a-404c-9e0a-0a414d305d03] Running
	I0803 23:11:23.154751   28167 system_pods.go:61] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:11:23.154757   28167 system_pods.go:61] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:11:23.154765   28167 system_pods.go:61] "kube-apiserver-ha-076508-m03" [035ef875-a6d9-40c6-982e-8fe6200ab98e] Running
	I0803 23:11:23.154774   28167 system_pods.go:61] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:11:23.154779   28167 system_pods.go:61] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:11:23.154787   28167 system_pods.go:61] "kube-controller-manager-ha-076508-m03" [108437fc-1c9a-4729-9d08-ebaf35e67bad] Running
	I0803 23:11:23.154791   28167 system_pods.go:61] "kube-proxy-7kmfh" [5bc5276d-480b-4c95-b6c2-0cbb2898d290] Running
	I0803 23:11:23.154796   28167 system_pods.go:61] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:11:23.154801   28167 system_pods.go:61] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:11:23.154807   28167 system_pods.go:61] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:11:23.154813   28167 system_pods.go:61] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:11:23.154820   28167 system_pods.go:61] "kube-scheduler-ha-076508-m03" [ead599ec-1d46-4457-850d-d189b57597c5] Running
	I0803 23:11:23.154825   28167 system_pods.go:61] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:11:23.154831   28167 system_pods.go:61] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:11:23.154835   28167 system_pods.go:61] "kube-vip-ha-076508-m03" [61ffbdc1-4caa-450c-8c00-29bca8fccd59] Running
	I0803 23:11:23.154842   28167 system_pods.go:61] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:11:23.154848   28167 system_pods.go:74] duration metric: took 188.836543ms to wait for pod list to return data ...
	I0803 23:11:23.154858   28167 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:11:23.340268   28167 request.go:629] Waited for 185.343692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:11:23.340326   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:11:23.340333   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.340344   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.340349   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.343708   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:23.343878   28167 default_sa.go:45] found service account: "default"
	I0803 23:11:23.343901   28167 default_sa.go:55] duration metric: took 189.036567ms for default service account to be created ...
	I0803 23:11:23.343911   28167 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:11:23.540341   28167 request.go:629] Waited for 196.355004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.540410   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/namespaces/kube-system/pods
	I0803 23:11:23.540419   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.540429   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.540439   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.547038   28167 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:11:23.553622   28167 system_pods.go:86] 24 kube-system pods found
	I0803 23:11:23.553643   28167 system_pods.go:89] "coredns-7db6d8ff4d-g4nns" [1c9c7190-c993-4b51-8ba6-62e3ab513836] Running
	I0803 23:11:23.553649   28167 system_pods.go:89] "coredns-7db6d8ff4d-jm52b" [65abad67-6b05-4dbb-8d33-723306bee46f] Running
	I0803 23:11:23.553653   28167 system_pods.go:89] "etcd-ha-076508" [0d38d9a9-4f0f-4928-bd37-010dc1b7623e] Running
	I0803 23:11:23.553657   28167 system_pods.go:89] "etcd-ha-076508-m02" [b473f99f-7b7c-42a2-affc-69b5305ae2e2] Running
	I0803 23:11:23.553662   28167 system_pods.go:89] "etcd-ha-076508-m03" [e13f1f48-0494-4c42-852b-34bb56b06d64] Running
	I0803 23:11:23.553665   28167 system_pods.go:89] "kindnet-bpdht" [156017b0-941c-4b32-a73c-4798d48e5434] Running
	I0803 23:11:23.553669   28167 system_pods.go:89] "kindnet-kw254" [fd80828b-1c0f-4a0d-a5d0-f25501e65fd9] Running
	I0803 23:11:23.553673   28167 system_pods.go:89] "kindnet-tzzq4" [42e5000f-b60a-404c-9e0a-0a414d305d03] Running
	I0803 23:11:23.553677   28167 system_pods.go:89] "kube-apiserver-ha-076508" [975ea5b3-4598-438a-99c6-8c2b644a714b] Running
	I0803 23:11:23.553682   28167 system_pods.go:89] "kube-apiserver-ha-076508-m02" [fdaa8b75-c8a4-444c-9288-6aaec5b31834] Running
	I0803 23:11:23.553688   28167 system_pods.go:89] "kube-apiserver-ha-076508-m03" [035ef875-a6d9-40c6-982e-8fe6200ab98e] Running
	I0803 23:11:23.553693   28167 system_pods.go:89] "kube-controller-manager-ha-076508" [3517b4d5-b6b3-4d39-9f4a-1b8c0ceae246] Running
	I0803 23:11:23.553700   28167 system_pods.go:89] "kube-controller-manager-ha-076508-m02" [f13130bb-619b-475f-ab9d-61422ca1a08b] Running
	I0803 23:11:23.553705   28167 system_pods.go:89] "kube-controller-manager-ha-076508-m03" [108437fc-1c9a-4729-9d08-ebaf35e67bad] Running
	I0803 23:11:23.553709   28167 system_pods.go:89] "kube-proxy-7kmfh" [5bc5276d-480b-4c95-b6c2-0cbb2898d290] Running
	I0803 23:11:23.553712   28167 system_pods.go:89] "kube-proxy-hkfgl" [31dca27d-663b-4bfa-8921-547686985835] Running
	I0803 23:11:23.553717   28167 system_pods.go:89] "kube-proxy-jvj96" [cdb6273b-31a8-48bc-8c5a-010363fc2a96] Running
	I0803 23:11:23.553722   28167 system_pods.go:89] "kube-scheduler-ha-076508" [63e9b52f-c7e8-4812-a666-284b2d383067] Running
	I0803 23:11:23.553728   28167 system_pods.go:89] "kube-scheduler-ha-076508-m02" [47cb368b-42e7-44f0-b1b1-40521064569b] Running
	I0803 23:11:23.553732   28167 system_pods.go:89] "kube-scheduler-ha-076508-m03" [ead599ec-1d46-4457-850d-d189b57597c5] Running
	I0803 23:11:23.553738   28167 system_pods.go:89] "kube-vip-ha-076508" [f0640d14-d8df-4fe5-8265-4f1215c2e881] Running
	I0803 23:11:23.553741   28167 system_pods.go:89] "kube-vip-ha-076508-m02" [0e1a3c8d-c1d4-4c29-b674-f13a62d2471c] Running
	I0803 23:11:23.553747   28167 system_pods.go:89] "kube-vip-ha-076508-m03" [61ffbdc1-4caa-450c-8c00-29bca8fccd59] Running
	I0803 23:11:23.553750   28167 system_pods.go:89] "storage-provisioner" [c98f9062-eff5-48e1-b260-7e8acf8df124] Running
	I0803 23:11:23.553756   28167 system_pods.go:126] duration metric: took 209.840827ms to wait for k8s-apps to be running ...
	I0803 23:11:23.553766   28167 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:11:23.553809   28167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:11:23.572164   28167 system_svc.go:56] duration metric: took 18.390119ms WaitForService to wait for kubelet
	I0803 23:11:23.572192   28167 kubeadm.go:582] duration metric: took 24.669558424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:11:23.572209   28167 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:11:23.739507   28167 request.go:629] Waited for 167.239058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.154:8443/api/v1/nodes
	I0803 23:11:23.739574   28167 round_trippers.go:463] GET https://192.168.39.154:8443/api/v1/nodes
	I0803 23:11:23.739579   28167 round_trippers.go:469] Request Headers:
	I0803 23:11:23.739587   28167 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:11:23.739594   28167 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:11:23.743056   28167 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:11:23.743986   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:11:23.744014   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:11:23.744032   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:11:23.744043   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:11:23.744067   28167 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:11:23.744076   28167 node_conditions.go:123] node cpu capacity is 2
	I0803 23:11:23.744082   28167 node_conditions.go:105] duration metric: took 171.868142ms to run NodePressure ...
	I0803 23:11:23.744097   28167 start.go:241] waiting for startup goroutines ...
	I0803 23:11:23.744125   28167 start.go:255] writing updated cluster config ...
	I0803 23:11:23.744410   28167 ssh_runner.go:195] Run: rm -f paused
	I0803 23:11:23.794827   28167 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0803 23:11:23.796773   28167 out.go:177] * Done! kubectl is now configured to use "ha-076508" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.713549048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14f56e32-4cef-4536-8025-a2a41e353762 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.714012816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722726966713987994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14f56e32-4cef-4536-8025-a2a41e353762 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.714558871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efbb1ab6-9ca5-44bd-b5a1-57975818b276 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.714631791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efbb1ab6-9ca5-44bd-b5a1-57975818b276 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.714866290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efbb1ab6-9ca5-44bd-b5a1-57975818b276 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.756157006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be084192-38c5-4d16-b54c-000532f1152a name=/runtime.v1.RuntimeService/Version
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.756248285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be084192-38c5-4d16-b54c-000532f1152a name=/runtime.v1.RuntimeService/Version
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.757805922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc8403f5-4d8d-40ae-8b75-672ab444a237 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.758234816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722726966758215993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc8403f5-4d8d-40ae-8b75-672ab444a237 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.758932728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=927a9af3-d235-403b-aa00-fb144fc3d970 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.759009809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=927a9af3-d235-403b-aa00-fb144fc3d970 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.759343728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=927a9af3-d235-403b-aa00-fb144fc3d970 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.800139896Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=423b55ab-c3f2-41e2-967e-3ff08fbb8681 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.800485983Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-9mswn,Uid:bb1d5016-7a80-440d-8d04-9c51a1c84199,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726685025938808,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:11:24.713213530Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c98f9062-eff5-48e1-b260-7e8acf8df124,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1722726481762927833,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-03T23:08:01.437532580Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-g4nns,Uid:1c9c7190-c993-4b51-8ba6-62e3ab513836,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726481757197627,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:08:01.441132640Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jm52b,Uid:65abad67-6b05-4dbb-8d33-723306bee46f,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722726481755203843,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:08:01.431429857Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&PodSandboxMetadata{Name:kube-proxy-jvj96,Uid:cdb6273b-31a8-48bc-8c5a-010363fc2a96,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726464446461403,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-03T23:07:43.515214901Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&PodSandboxMetadata{Name:kindnet-bpdht,Uid:156017b0-941c-4b32-a73c-4798d48e5434,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726464441028300,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:07:43.506262420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-076508,Uid:5cfc7ccbbd8869f463d6c9d7f25c7b69,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726444451910940,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5cfc7ccbbd8869f463d6c9d7f25c7b69,kubernetes.io/config.seen: 2024-08-03T23:07:23.953739162Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-076508,Uid:1a81668b751d16a05138cc2a943d8c72,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726444447143583,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{kube
rnetes.io/config.hash: 1a81668b751d16a05138cc2a943d8c72,kubernetes.io/config.seen: 2024-08-03T23:07:23.953735409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-076508,Uid:03b5e048885e5fea318d5f49c66398f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726444425862140,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.154:8443,kubernetes.io/config.hash: 03b5e048885e5fea318d5f49c66398f7,kubernetes.io/config.seen: 2024-08-03T23:07:23.953738115Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf23341fb90dfbb23de40998a0663d7dc3a3614d5
110341e2e73b4cac65f2bbb,Metadata:&PodSandboxMetadata{Name:etcd-ha-076508,Uid:a8200b39f80bd8260f39151e31b90485,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726444420175425,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.154:2379,kubernetes.io/config.hash: a8200b39f80bd8260f39151e31b90485,kubernetes.io/config.seen: 2024-08-03T23:07:23.953736920Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-076508,Uid:bb662281698a59578ac55a71345bbdf9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722726444414224428,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb662281698a59578ac55a71345bbdf9,kubernetes.io/config.seen: 2024-08-03T23:07:23.953730779Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=423b55ab-c3f2-41e2-967e-3ff08fbb8681 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.801065680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95c7ba0b-134c-4a36-8358-6fabd92268d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.801175729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95c7ba0b-134c-4a36-8358-6fabd92268d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.801603296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95c7ba0b-134c-4a36-8358-6fabd92268d0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.802059100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c893f94c-8414-4166-b001-a8a408ccecb3 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.802107286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c893f94c-8414-4166-b001-a8a408ccecb3 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.803648908Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=706ac648-f71d-4afb-ac55-37d7b503709d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.804069875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722726966804050292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=706ac648-f71d-4afb-ac55-37d7b503709d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.804473206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2e915ec-5b5e-42c1-bc03-284d21ddbe3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.804533689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2e915ec-5b5e-42c1-bc03-284d21ddbe3c name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:16:06 ha-076508 crio[678]: time="2024-08-03 23:16:06.804811528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722726687649144408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d,PodSandboxId:4047efed84d9c767349916183cb26ca9a0f6177b610811509eedaf85daacdadb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722726481986802246,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482042517794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722726482018555464,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b
05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722726470071171994,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272646
4614751464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e,PodSandboxId:4cd54ec3ddfec1ec9679b6a71cae1d9755b622cbd8f43376be77d700cb2eecc1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227264476
76391010,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a81668b751d16a05138cc2a943d8c72,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722726444756105758,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8,PodSandboxId:13cab867b36e126d02466fdbfb23ca5a1449155a529a2f55e75ba9e2580e9b68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722726444760056725,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a,PodSandboxId:b468de169be0bee0efb7fd5d17ff307a33863a36c0fa53cb3e64ad2cff2b6c88,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722726444692902509,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722726444585984884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2e915ec-5b5e-42c1-bc03-284d21ddbe3c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf2cd88f9d490       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   5999015810d66       busybox-fc5497c4f-9mswn
	e4d2591ba7d5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   ce24a7aa66e68       coredns-7db6d8ff4d-g4nns
	06304cb4cc30c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   b802406e46b4c       coredns-7db6d8ff4d-jm52b
	6f7c5e8e3bdac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   4047efed84d9c       storage-provisioner
	992a3ac9b52e9       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago       Running             kindnet-cni               0                   f61ecf195fc7f       kindnet-bpdht
	c3100c43f706e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago       Running             kube-proxy                0                   9f02c76f5b54a       kube-proxy-jvj96
	d05a03627874a       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   4cd54ec3ddfec       kube-vip-ha-076508
	1e30a0cbac1a3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   13cab867b36e1       kube-controller-manager-ha-076508
	94ea41effc5da       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   893b2ee90e13f       kube-scheduler-ha-076508
	4ce5fe2a1f3aa       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   b468de169be0b       kube-apiserver-ha-076508
	f127531f146d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   bf23341fb90df       etcd-ha-076508
	
	
	==> coredns [06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff] <==
	[INFO] 10.244.0.4:41384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219271s
	[INFO] 10.244.0.4:40191 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230574s
	[INFO] 10.244.0.4:59881 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008692718s
	[INFO] 10.244.0.4:47621 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107904s
	[INFO] 10.244.0.4:38085 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119908s
	[INFO] 10.244.2.2:54633 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00219105s
	[INFO] 10.244.2.2:54240 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087449s
	[INFO] 10.244.1.2:44472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000182942s
	[INFO] 10.244.1.2:54284 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020417s
	[INFO] 10.244.1.2:35720 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113124s
	[INFO] 10.244.1.2:49197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125884s
	[INFO] 10.244.1.2:42019 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010825s
	[INFO] 10.244.1.2:36505 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000274487s
	[INFO] 10.244.0.4:53634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092514s
	[INFO] 10.244.0.4:37869 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148859s
	[INFO] 10.244.0.4:34409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007386s
	[INFO] 10.244.2.2:37127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00023425s
	[INFO] 10.244.1.2:45090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198771s
	[INFO] 10.244.1.2:35116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097607s
	[INFO] 10.244.0.4:54156 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000252361s
	[INFO] 10.244.0.4:56228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118127s
	[INFO] 10.244.2.2:40085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113887s
	[INFO] 10.244.2.2:41147 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160253s
	[INFO] 10.244.1.2:34773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224176s
	[INFO] 10.244.1.2:41590 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094468s
	
	
	==> coredns [e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac] <==
	[INFO] 10.244.2.2:40415 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000128857s
	[INFO] 10.244.2.2:55624 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001648548s
	[INFO] 10.244.1.2:49499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129181s
	[INFO] 10.244.0.4:35373 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087731s
	[INFO] 10.244.0.4:34194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127612s
	[INFO] 10.244.2.2:55281 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134091s
	[INFO] 10.244.2.2:54805 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174016s
	[INFO] 10.244.2.2:57182 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000169629s
	[INFO] 10.244.2.2:60918 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001430378s
	[INFO] 10.244.2.2:56177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124595s
	[INFO] 10.244.2.2:37833 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009163s
	[INFO] 10.244.1.2:59379 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001877956s
	[INFO] 10.244.1.2:55115 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00153877s
	[INFO] 10.244.0.4:60770 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073925s
	[INFO] 10.244.2.2:35733 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000194756s
	[INFO] 10.244.2.2:41572 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113187s
	[INFO] 10.244.2.2:56390 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000128625s
	[INFO] 10.244.1.2:57417 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136164s
	[INFO] 10.244.1.2:45630 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074866s
	[INFO] 10.244.0.4:56762 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117476s
	[INFO] 10.244.0.4:47543 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144888s
	[INFO] 10.244.2.2:48453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203019s
	[INFO] 10.244.2.2:47323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155497s
	[INFO] 10.244.1.2:55651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193064s
	[INFO] 10.244.1.2:54565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106172s
	
	
	==> describe nodes <==
	Name:               ha-076508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_07_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:16:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:11:35 +0000   Sat, 03 Aug 2024 23:08:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    ha-076508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f520408175b740ceb19f810f6b0739d9
	  System UUID:                f5204081-75b7-40ce-b19f-810f6b0739d9
	  Boot ID:                    1b5fc419-04f3-4085-a948-6aee54d39a0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9mswn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 coredns-7db6d8ff4d-g4nns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 coredns-7db6d8ff4d-jm52b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-ha-076508                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m37s
	  kube-system                 kindnet-bpdht                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m24s
	  kube-system                 kube-apiserver-ha-076508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-ha-076508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-jvj96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-ha-076508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-vip-ha-076508                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m22s  kube-proxy       
	  Normal  Starting                 8m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m37s  kubelet          Node ha-076508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s  kubelet          Node ha-076508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s  kubelet          Node ha-076508 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m25s  node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal  NodeReady                8m6s   kubelet          Node ha-076508 status is now: NodeReady
	  Normal  RegisteredNode           6m8s   node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal  RegisteredNode           4m54s  node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	
	
	Name:               ha-076508-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_09_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:09:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:12:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 03 Aug 2024 23:11:44 +0000   Sat, 03 Aug 2024 23:13:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-076508-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e37b92099f364fcfb7894de373a13dc0
	  System UUID:                e37b9209-9f36-4fcf-b789-4de373a13dc0
	  Boot ID:                    ce951a70-7d26-44f7-b876-80429f6067a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wlr2g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-076508-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-kw254                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-076508-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-076508-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-hkfgl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-076508-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-076508-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node ha-076508-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m26s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  NodeNotReady             2m50s                  node-controller  Node ha-076508-m02 status is now: NodeNotReady
	
	
	Name:               ha-076508-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_10_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:10:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:16:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:10:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:10:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:10:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:11:57 +0000   Sat, 03 Aug 2024 23:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    ha-076508-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0c4ebbd959429f966b637eb26caf62
	  System UUID:                ad0c4ebb-d959-429f-966b-637eb26caf62
	  Boot ID:                    48d495ed-b4cd-49d1-87cd-cac9c1cc8ea9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nfwfw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-076508-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-tzzq4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-apiserver-ha-076508-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-ha-076508-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-7kmfh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-ha-076508-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-vip-ha-076508-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node ha-076508-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	
	
	Name:               ha-076508-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_12_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:12:33 +0000   Sat, 03 Aug 2024 23:12:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-076508-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 59e0fe8296564277a8f997ffad0b72b7
	  System UUID:                59e0fe82-9656-4277-a8f9-97ffad0b72b7
	  Boot ID:                    1e39986a-cb9b-4675-9bbc-a7bb913ff696
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hdkw5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-proxy-ff944    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m5s (x2 over 4m5s)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x2 over 4m5s)  kubelet          Node ha-076508-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x2 over 4m5s)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal  NodeReady                3m44s                kubelet          Node ha-076508-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 3 23:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050902] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041428] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.797018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.674662] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug 3 23:07] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.547215] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056174] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.182365] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.110609] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.279600] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.413542] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.061522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.061905] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +1.335796] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.036158] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.075573] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.924842] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.636926] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 3 23:09] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e] <==
	{"level":"warn","ts":"2024-08-03T23:16:07.013486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.06103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.089648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.10156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.106413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.133114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.14765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.16082Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3cc90d899860a179","rtt":"1.25816ms","error":"dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-03T23:16:07.160913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.160975Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3cc90d899860a179","rtt":"12.208499ms","error":"dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-03T23:16:07.168567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.183565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.193568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.204248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.232249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.241484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.252824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.257492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.26138Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.294904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.297714Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.303442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.309388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.3224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:16:07.360779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:16:07 up 9 min,  0 users,  load average: 0.10, 0.23, 0.13
	Linux ha-076508 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18] <==
	I0803 23:15:31.279209       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:15:41.274471       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:15:41.274591       1 main.go:299] handling current node
	I0803 23:15:41.274633       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:15:41.274652       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:15:41.274848       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:15:41.274876       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:15:41.274956       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:15:41.274975       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:15:51.270451       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:15:51.270606       1 main.go:299] handling current node
	I0803 23:15:51.270655       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:15:51.270661       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:15:51.270864       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:15:51.270890       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:15:51.270952       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:15:51.270972       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:16:01.276855       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:16:01.276957       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:16:01.277238       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:16:01.277265       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:16:01.277457       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:16:01.277491       1 main.go:299] handling current node
	I0803 23:16:01.277516       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:16:01.277521       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a] <==
	E0803 23:07:30.639758       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0803 23:07:30.640974       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0803 23:07:30.641029       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0803 23:07:30.642230       1 timeout.go:142] post-timeout activity - time-elapsed: 2.470085ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
	I0803 23:07:30.901565       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0803 23:07:30.933757       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0803 23:07:30.947671       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0803 23:07:43.479194       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0803 23:07:43.763026       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0803 23:11:29.000941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58346: use of closed network connection
	E0803 23:11:29.187412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58360: use of closed network connection
	E0803 23:11:29.383828       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58380: use of closed network connection
	E0803 23:11:29.600601       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58396: use of closed network connection
	E0803 23:11:29.788511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58416: use of closed network connection
	E0803 23:11:30.003088       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58426: use of closed network connection
	E0803 23:11:30.184249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58444: use of closed network connection
	E0803 23:11:30.366155       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58456: use of closed network connection
	E0803 23:11:30.541261       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58472: use of closed network connection
	E0803 23:11:30.860021       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58498: use of closed network connection
	E0803 23:11:31.087099       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58514: use of closed network connection
	E0803 23:11:31.263223       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58534: use of closed network connection
	E0803 23:11:31.445252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58552: use of closed network connection
	E0803 23:11:31.626787       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58566: use of closed network connection
	E0803 23:11:31.816995       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58588: use of closed network connection
	W0803 23:12:59.473673       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.154 192.168.39.86]
	
	
	==> kube-controller-manager [1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8] <==
	I0803 23:10:55.447683       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-076508-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:10:57.878648       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076508-m03"
	I0803 23:11:24.710531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.559026ms"
	I0803 23:11:24.810991       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.231938ms"
	I0803 23:11:24.941532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="130.410072ms"
	I0803 23:11:25.185637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.040499ms"
	I0803 23:11:25.226083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.372653ms"
	I0803 23:11:25.227013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="460.915µs"
	I0803 23:11:25.266351       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.953994ms"
	I0803 23:11:25.267357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="172.453µs"
	I0803 23:11:25.363589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.283µs"
	I0803 23:11:27.823545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.623105ms"
	I0803 23:11:27.823752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.914µs"
	I0803 23:11:28.300364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.189063ms"
	I0803 23:11:28.300760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="305.671µs"
	I0803 23:11:28.543884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.853725ms"
	I0803 23:11:28.544416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.528µs"
	E0803 23:12:02.108031       1 certificate_controller.go:146] Sync csr-ccvdl failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-ccvdl": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:12:02.399995       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-076508-m04\" does not exist"
	I0803 23:12:02.461701       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-076508-m04" podCIDRs=["10.244.3.0/24"]
	I0803 23:12:02.908837       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-076508-m04"
	I0803 23:12:23.393572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076508-m04"
	I0803 23:13:17.940876       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076508-m04"
	I0803 23:13:17.983262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.178854ms"
	I0803 23:13:17.983744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.39µs"
	
	
	==> kube-proxy [c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa] <==
	I0803 23:07:44.832758       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:07:44.852587       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0803 23:07:44.934096       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:07:44.934142       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:07:44.934159       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:07:44.937787       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:07:44.938109       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:07:44.938153       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:07:44.940042       1 config.go:192] "Starting service config controller"
	I0803 23:07:44.940395       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:07:44.940465       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:07:44.940485       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:07:44.941457       1 config.go:319] "Starting node config controller"
	I0803 23:07:44.942631       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:07:45.041527       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:07:45.041552       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:07:45.043109       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf] <==
	W0803 23:07:28.671970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:07:28.672088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:07:28.780566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:07:28.780614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 23:07:28.783590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:07:28.783671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 23:07:28.807701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0803 23:07:28.807746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0803 23:07:28.893343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:07:28.893449       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:07:29.242730       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:07:29.243403       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:07:32.091551       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0803 23:11:24.672112       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nfwfw\": pod busybox-fc5497c4f-nfwfw is already assigned to node \"ha-076508-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-nfwfw" node="ha-076508-m03"
	E0803 23:11:24.672328       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a132b6e0-614a-4aaa-b1f6-b11bdf6a0fc0(default/busybox-fc5497c4f-nfwfw) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-nfwfw"
	E0803 23:11:24.672373       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nfwfw\": pod busybox-fc5497c4f-nfwfw is already assigned to node \"ha-076508-m03\"" pod="default/busybox-fc5497c4f-nfwfw"
	I0803 23:11:24.672440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-nfwfw" node="ha-076508-m03"
	E0803 23:11:24.708174       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wlr2g\": pod busybox-fc5497c4f-wlr2g is already assigned to node \"ha-076508-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wlr2g" node="ha-076508-m02"
	E0803 23:11:24.717435       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5cc9bc14-7454-4e5b-9dfc-c7702f42323b(default/busybox-fc5497c4f-wlr2g) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wlr2g"
	E0803 23:11:24.725004       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wlr2g\": pod busybox-fc5497c4f-wlr2g is already assigned to node \"ha-076508-m02\"" pod="default/busybox-fc5497c4f-wlr2g"
	I0803 23:11:24.725485       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wlr2g" node="ha-076508-m02"
	E0803 23:12:02.482595       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ksmxp\": pod kindnet-ksmxp is already assigned to node \"ha-076508-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ksmxp" node="ha-076508-m04"
	E0803 23:12:02.482703       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 415811c8-0f4b-44c3-954e-8e56747d8462(kube-system/kindnet-ksmxp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ksmxp"
	E0803 23:12:02.482727       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ksmxp\": pod kindnet-ksmxp is already assigned to node \"ha-076508-m04\"" pod="kube-system/kindnet-ksmxp"
	I0803 23:12:02.482788       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ksmxp" node="ha-076508-m04"
	
	
	==> kubelet <==
	Aug 03 23:11:30 ha-076508 kubelet[1368]: E0803 23:11:30.852522    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:11:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:11:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:11:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:11:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:12:30 ha-076508 kubelet[1368]: E0803 23:12:30.836936    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:12:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:12:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:12:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:12:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:13:30 ha-076508 kubelet[1368]: E0803 23:13:30.841062    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:13:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:13:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:13:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:13:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:14:30 ha-076508 kubelet[1368]: E0803 23:14:30.838655    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:14:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:14:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:14:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:14:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:15:30 ha-076508 kubelet[1368]: E0803 23:15:30.837560    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:15:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:15:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:15:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:15:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076508 -n ha-076508
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.04s)
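The post-mortem above shows the restarted secondary, ha-076508-m02, stuck in NodeNotReady ("Kubelet stopped posting node status") while etcd on the primary keeps dropping Raft heartbeats to 192.168.39.245:2380 with "no route to host". A minimal sketch of how to confirm this by hand, assuming the ha-076508 profile and node names from the logs above (the grep filter is illustrative only):

	kubectl --context ha-076508 get nodes -o wide
	kubectl --context ha-076508 describe node ha-076508-m02 | grep -A 6 "Conditions:"
	out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo systemctl status kubelet"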

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-076508 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-076508 -v=7 --alsologtostderr
E0803 23:16:25.692145   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-076508 -v=7 --alsologtostderr: exit status 82 (2m1.924615645s)
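As the stderr below shows, ha-076508-m04 stops in about 1.4s, but ha-076508-m03 never reports stopped and minikube keeps polling "Waiting for machine to stop N/120" until the command gives up with exit status 82. A minimal sketch of how the underlying KVM domain state could be checked on the CI host while that wait spins, assuming the libvirt domain names match the node names (the "domain ha-076508-m03 has defined MAC address" lines in the stderr suggest they do):

	virsh list --all
	virsh domstate ha-076508-m03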

                                                
                                                
-- stdout --
	* Stopping node "ha-076508-m04"  ...
	* Stopping node "ha-076508-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:16:08.845496   34331 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:16:08.845761   34331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:16:08.845771   34331 out.go:304] Setting ErrFile to fd 2...
	I0803 23:16:08.845776   34331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:16:08.845956   34331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:16:08.846189   34331 out.go:298] Setting JSON to false
	I0803 23:16:08.846273   34331 mustload.go:65] Loading cluster: ha-076508
	I0803 23:16:08.846612   34331 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:16:08.846691   34331 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:16:08.846870   34331 mustload.go:65] Loading cluster: ha-076508
	I0803 23:16:08.846999   34331 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:16:08.847033   34331 stop.go:39] StopHost: ha-076508-m04
	I0803 23:16:08.847412   34331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:08.847458   34331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:08.862791   34331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36219
	I0803 23:16:08.863347   34331 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:08.863945   34331 main.go:141] libmachine: Using API Version  1
	I0803 23:16:08.863973   34331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:08.864289   34331 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:08.867044   34331 out.go:177] * Stopping node "ha-076508-m04"  ...
	I0803 23:16:08.868445   34331 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0803 23:16:08.868478   34331 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:16:08.868753   34331 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0803 23:16:08.868777   34331 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:16:08.871575   34331 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:16:08.872086   34331 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:11:47 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:16:08.872123   34331 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:16:08.872311   34331 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:16:08.872493   34331 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:16:08.872662   34331 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:16:08.872832   34331 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:16:08.960676   34331 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0803 23:16:09.018420   34331 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0803 23:16:09.071998   34331 main.go:141] libmachine: Stopping "ha-076508-m04"...
	I0803 23:16:09.072037   34331 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:16:09.073741   34331 main.go:141] libmachine: (ha-076508-m04) Calling .Stop
	I0803 23:16:09.077288   34331 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 0/120
	I0803 23:16:10.309627   34331 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:16:10.310948   34331 main.go:141] libmachine: Machine "ha-076508-m04" was stopped.
	I0803 23:16:10.310962   34331 stop.go:75] duration metric: took 1.4425238s to stop
	I0803 23:16:10.310980   34331 stop.go:39] StopHost: ha-076508-m03
	I0803 23:16:10.311303   34331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:16:10.311351   34331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:16:10.327084   34331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37983
	I0803 23:16:10.327509   34331 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:16:10.327968   34331 main.go:141] libmachine: Using API Version  1
	I0803 23:16:10.327992   34331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:16:10.328305   34331 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:16:10.330565   34331 out.go:177] * Stopping node "ha-076508-m03"  ...
	I0803 23:16:10.332078   34331 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0803 23:16:10.332110   34331 main.go:141] libmachine: (ha-076508-m03) Calling .DriverName
	I0803 23:16:10.332318   34331 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0803 23:16:10.332340   34331 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHHostname
	I0803 23:16:10.335243   34331 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:16:10.335705   34331 main.go:141] libmachine: (ha-076508-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:20:c2", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:10:22 +0000 UTC Type:0 Mac:52:54:00:f0:20:c2 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-076508-m03 Clientid:01:52:54:00:f0:20:c2}
	I0803 23:16:10.335735   34331 main.go:141] libmachine: (ha-076508-m03) DBG | domain ha-076508-m03 has defined IP address 192.168.39.86 and MAC address 52:54:00:f0:20:c2 in network mk-ha-076508
	I0803 23:16:10.335877   34331 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHPort
	I0803 23:16:10.336045   34331 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHKeyPath
	I0803 23:16:10.336173   34331 main.go:141] libmachine: (ha-076508-m03) Calling .GetSSHUsername
	I0803 23:16:10.336288   34331 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m03/id_rsa Username:docker}
	I0803 23:16:10.416876   34331 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0803 23:16:10.470881   34331 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0803 23:16:10.526247   34331 main.go:141] libmachine: Stopping "ha-076508-m03"...
	I0803 23:16:10.526271   34331 main.go:141] libmachine: (ha-076508-m03) Calling .GetState
	I0803 23:16:10.527854   34331 main.go:141] libmachine: (ha-076508-m03) Calling .Stop
	I0803 23:16:10.531469   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 0/120
	I0803 23:16:11.532927   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 1/120
	I0803 23:16:12.534182   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 2/120
	I0803 23:16:13.535380   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 3/120
	I0803 23:16:14.536631   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 4/120
	I0803 23:16:15.538865   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 5/120
	I0803 23:16:16.540683   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 6/120
	I0803 23:16:17.542305   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 7/120
	I0803 23:16:18.544043   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 8/120
	I0803 23:16:19.545451   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 9/120
	I0803 23:16:20.547465   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 10/120
	I0803 23:16:21.548900   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 11/120
	I0803 23:16:22.550562   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 12/120
	I0803 23:16:23.552759   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 13/120
	I0803 23:16:24.554189   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 14/120
	I0803 23:16:25.555690   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 15/120
	I0803 23:16:26.557209   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 16/120
	I0803 23:16:27.558666   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 17/120
	I0803 23:16:28.560343   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 18/120
	I0803 23:16:29.562246   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 19/120
	I0803 23:16:30.564418   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 20/120
	I0803 23:16:31.566089   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 21/120
	I0803 23:16:32.567811   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 22/120
	I0803 23:16:33.569499   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 23/120
	I0803 23:16:34.571050   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 24/120
	I0803 23:16:35.573223   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 25/120
	I0803 23:16:36.574467   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 26/120
	I0803 23:16:37.576140   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 27/120
	I0803 23:16:38.577650   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 28/120
	I0803 23:16:39.579149   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 29/120
	I0803 23:16:40.580455   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 30/120
	I0803 23:16:41.582166   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 31/120
	I0803 23:16:42.583623   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 32/120
	I0803 23:16:43.585122   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 33/120
	I0803 23:16:44.586496   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 34/120
	I0803 23:16:45.587689   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 35/120
	I0803 23:16:46.589194   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 36/120
	I0803 23:16:47.590728   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 37/120
	I0803 23:16:48.592125   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 38/120
	I0803 23:16:49.593456   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 39/120
	I0803 23:16:50.595308   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 40/120
	I0803 23:16:51.596834   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 41/120
	I0803 23:16:52.598333   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 42/120
	I0803 23:16:53.599812   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 43/120
	I0803 23:16:54.601399   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 44/120
	I0803 23:16:55.603126   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 45/120
	I0803 23:16:56.604626   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 46/120
	I0803 23:16:57.606214   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 47/120
	I0803 23:16:58.607576   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 48/120
	I0803 23:16:59.609050   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 49/120
	I0803 23:17:00.610889   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 50/120
	I0803 23:17:01.612448   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 51/120
	I0803 23:17:02.613869   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 52/120
	I0803 23:17:03.615480   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 53/120
	I0803 23:17:04.616796   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 54/120
	I0803 23:17:05.618553   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 55/120
	I0803 23:17:06.619943   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 56/120
	I0803 23:17:07.621572   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 57/120
	I0803 23:17:08.622910   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 58/120
	I0803 23:17:09.624566   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 59/120
	I0803 23:17:10.626510   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 60/120
	I0803 23:17:11.627747   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 61/120
	I0803 23:17:12.628996   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 62/120
	I0803 23:17:13.630181   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 63/120
	I0803 23:17:14.631627   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 64/120
	I0803 23:17:15.633213   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 65/120
	I0803 23:17:16.634480   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 66/120
	I0803 23:17:17.635788   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 67/120
	I0803 23:17:18.637317   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 68/120
	I0803 23:17:19.638566   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 69/120
	I0803 23:17:20.640252   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 70/120
	I0803 23:17:21.641696   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 71/120
	I0803 23:17:22.643047   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 72/120
	I0803 23:17:23.644218   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 73/120
	I0803 23:17:24.646221   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 74/120
	I0803 23:17:25.648032   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 75/120
	I0803 23:17:26.649393   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 76/120
	I0803 23:17:27.650626   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 77/120
	I0803 23:17:28.651990   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 78/120
	I0803 23:17:29.653504   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 79/120
	I0803 23:17:30.655573   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 80/120
	I0803 23:17:31.657277   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 81/120
	I0803 23:17:32.658715   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 82/120
	I0803 23:17:33.660110   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 83/120
	I0803 23:17:34.661669   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 84/120
	I0803 23:17:35.663404   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 85/120
	I0803 23:17:36.664747   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 86/120
	I0803 23:17:37.666178   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 87/120
	I0803 23:17:38.668470   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 88/120
	I0803 23:17:39.669674   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 89/120
	I0803 23:17:40.671354   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 90/120
	I0803 23:17:41.672716   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 91/120
	I0803 23:17:42.674241   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 92/120
	I0803 23:17:43.675746   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 93/120
	I0803 23:17:44.676935   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 94/120
	I0803 23:17:45.678513   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 95/120
	I0803 23:17:46.679741   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 96/120
	I0803 23:17:47.681027   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 97/120
	I0803 23:17:48.682341   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 98/120
	I0803 23:17:49.683832   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 99/120
	I0803 23:17:50.685532   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 100/120
	I0803 23:17:51.687894   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 101/120
	I0803 23:17:52.689069   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 102/120
	I0803 23:17:53.690460   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 103/120
	I0803 23:17:54.691602   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 104/120
	I0803 23:17:55.693097   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 105/120
	I0803 23:17:56.694263   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 106/120
	I0803 23:17:57.695614   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 107/120
	I0803 23:17:58.697684   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 108/120
	I0803 23:17:59.700030   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 109/120
	I0803 23:18:00.701839   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 110/120
	I0803 23:18:01.703896   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 111/120
	I0803 23:18:02.705150   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 112/120
	I0803 23:18:03.706538   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 113/120
	I0803 23:18:04.707653   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 114/120
	I0803 23:18:05.708885   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 115/120
	I0803 23:18:06.710076   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 116/120
	I0803 23:18:07.711370   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 117/120
	I0803 23:18:08.712685   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 118/120
	I0803 23:18:09.713923   34331 main.go:141] libmachine: (ha-076508-m03) Waiting for machine to stop 119/120
	I0803 23:18:10.715005   34331 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0803 23:18:10.715069   34331 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0803 23:18:10.716956   34331 out.go:177] 
	W0803 23:18:10.718252   34331 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0803 23:18:10.718268   34331 out.go:239] * 
	W0803 23:18:10.720423   34331 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 23:18:10.724631   34331 out.go:177] 

                                                
                                                
** /stderr **
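The stderr above shows the pattern behind the GUEST_STOP_TIMEOUT: a stop is requested for ha-076508-m03, then the machine state is polled once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before the command gives up with "unable to stop vm, current state \"Running\"". The following is a minimal Go sketch of that polling pattern, assuming a hypothetical stopper interface as a stand-in for the libmachine driver calls (.Stop / .GetState) seen in the log; it is not minikube's actual implementation.

package stopdemo

import (
	"fmt"
	"log"
	"time"
)

// stopper is a hypothetical stand-in for the libmachine driver calls
// seen in the log above; it is not minikube's real API.
type stopper interface {
	Stop() error
	State() (string, error)
}

// stopWithTimeout mirrors the "Waiting for machine to stop i/120" pattern:
// request a stop, then poll once per second for up to 120 attempts.
func stopWithTimeout(m stopper) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < 120; i++ {
		if st, err := m.State(); err == nil && st == "Stopped" {
			return nil
		}
		log.Printf("Waiting for machine to stop %d/120", i)
		time.Sleep(time.Second)
	}
	st, _ := m.State()
	return fmt.Errorf("unable to stop vm, current state %q", st)
}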
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-076508 -v=7 --alsologtostderr" : exit status 82
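In this run the stop that hit GUEST_STOP_TIMEOUT exited with status 82, so a wrapper script or test helper can recognize the timeout from the exit code alone. Below is a hedged sketch, not part of the test suite: the binary path, profile name, and flags are copied from the audit table later in this log, and the meaning of exit code 82 is inferred from this report rather than from minikube documentation.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same stop invocation recorded in the audit table below; adjust the
	// binary path and profile name for a local checkout.
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "ha-076508", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	log.Printf("minikube stop output:\n%s", out)
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 82 {
		// In this report, exit status 82 accompanied GUEST_STOP_TIMEOUT.
		log.Printf("stop timed out waiting for the VM to shut down")
	}
}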
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076508 --wait=true -v=7 --alsologtostderr
E0803 23:18:27.619300   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:19:50.662549   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:20:58.007924   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-076508 --wait=true -v=7 --alsologtostderr: (4m18.445747779s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-076508
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076508 -n ha-076508
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076508 logs -n 25: (1.985997266s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m02:/home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m04 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp testdata/cp-test.txt                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508:/home/docker/cp-test_ha-076508-m04_ha-076508.txt                      |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508 sudo cat                                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508.txt                                |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m02:/home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03:/home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m03 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-076508 node stop m02 -v=7                                                    | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-076508 node start m02 -v=7                                                   | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-076508 -v=7                                                          | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-076508 -v=7                                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-076508 --wait=true -v=7                                                   | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:18 UTC | 03 Aug 24 23:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-076508                                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:22 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:18:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:18:10.772185   35217 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:18:10.772415   35217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:18:10.772423   35217 out.go:304] Setting ErrFile to fd 2...
	I0803 23:18:10.772427   35217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:18:10.772611   35217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:18:10.773151   35217 out.go:298] Setting JSON to false
	I0803 23:18:10.774126   35217 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3635,"bootTime":1722723456,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:18:10.774187   35217 start.go:139] virtualization: kvm guest
	I0803 23:18:10.779445   35217 out.go:177] * [ha-076508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:18:10.780986   35217 notify.go:220] Checking for updates...
	I0803 23:18:10.781036   35217 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:18:10.782487   35217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:18:10.783900   35217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:18:10.784978   35217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:18:10.786219   35217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:18:10.787608   35217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:18:10.789142   35217 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:18:10.789226   35217 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:18:10.789708   35217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:18:10.789766   35217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:18:10.804359   35217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I0803 23:18:10.804867   35217 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:18:10.805479   35217 main.go:141] libmachine: Using API Version  1
	I0803 23:18:10.805501   35217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:18:10.805803   35217 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:18:10.805996   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:18:10.841624   35217 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:18:10.842852   35217 start.go:297] selected driver: kvm2
	I0803 23:18:10.842864   35217 start.go:901] validating driver "kvm2" against &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:18:10.842990   35217 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:18:10.843305   35217 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:18:10.843374   35217 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:18:10.859348   35217 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:18:10.860095   35217 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:18:10.860156   35217 cni.go:84] Creating CNI manager for ""
	I0803 23:18:10.860169   35217 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:18:10.860231   35217 start.go:340] cluster config:
	{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:18:10.860343   35217 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:18:10.862440   35217 out.go:177] * Starting "ha-076508" primary control-plane node in "ha-076508" cluster
	I0803 23:18:10.863723   35217 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:18:10.863767   35217 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:18:10.863775   35217 cache.go:56] Caching tarball of preloaded images
	I0803 23:18:10.863880   35217 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:18:10.863891   35217 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:18:10.864004   35217 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:18:10.864195   35217 start.go:360] acquireMachinesLock for ha-076508: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:18:10.864238   35217 start.go:364] duration metric: took 23.093µs to acquireMachinesLock for "ha-076508"
	I0803 23:18:10.864252   35217 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:18:10.864259   35217 fix.go:54] fixHost starting: 
	I0803 23:18:10.864534   35217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:18:10.864562   35217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:18:10.880151   35217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39793
	I0803 23:18:10.880560   35217 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:18:10.881082   35217 main.go:141] libmachine: Using API Version  1
	I0803 23:18:10.881113   35217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:18:10.881474   35217 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:18:10.881694   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:18:10.881861   35217 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:18:10.883590   35217 fix.go:112] recreateIfNeeded on ha-076508: state=Running err=<nil>
	W0803 23:18:10.883608   35217 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:18:10.885653   35217 out.go:177] * Updating the running kvm2 "ha-076508" VM ...
	I0803 23:18:10.887082   35217 machine.go:94] provisionDockerMachine start ...
	I0803 23:18:10.887104   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:18:10.887314   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:10.889840   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:10.890295   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:10.890322   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:10.890467   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:10.890643   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:10.890815   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:10.890956   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:10.891125   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:10.891302   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:10.891313   35217 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:18:11.006636   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508
	
	I0803 23:18:11.006673   35217 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:18:11.006948   35217 buildroot.go:166] provisioning hostname "ha-076508"
	I0803 23:18:11.006974   35217 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:18:11.007237   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.009895   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.010267   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.010294   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.010514   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.010705   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.010871   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.011000   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.011173   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:11.011388   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:11.011406   35217 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508 && echo "ha-076508" | sudo tee /etc/hostname
	I0803 23:18:11.147566   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508
	
	I0803 23:18:11.147596   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.150388   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.150710   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.150754   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.150879   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.151081   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.151201   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.151347   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.151498   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:11.151693   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:11.151713   35217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:18:11.266305   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:18:11.266334   35217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:18:11.266355   35217 buildroot.go:174] setting up certificates
	I0803 23:18:11.266386   35217 provision.go:84] configureAuth start
	I0803 23:18:11.266407   35217 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:18:11.266688   35217 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:18:11.269464   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.269868   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.269913   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.270068   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.272260   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.272625   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.272647   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.272788   35217 provision.go:143] copyHostCerts
	I0803 23:18:11.272814   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:18:11.272842   35217 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:18:11.272849   35217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:18:11.272913   35217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:18:11.273004   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:18:11.273022   35217 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:18:11.273029   35217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:18:11.273052   35217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:18:11.273152   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:18:11.273170   35217 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:18:11.273174   35217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:18:11.273203   35217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:18:11.273256   35217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508 san=[127.0.0.1 192.168.39.154 ha-076508 localhost minikube]
	I0803 23:18:11.566185   35217 provision.go:177] copyRemoteCerts
	I0803 23:18:11.566260   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:18:11.566281   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.569701   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.570079   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.570107   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.570342   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.570577   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.570853   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.571022   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:18:11.657388   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:18:11.657483   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:18:11.689100   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:18:11.689177   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0803 23:18:11.732018   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:18:11.732089   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:18:11.759569   35217 provision.go:87] duration metric: took 493.171292ms to configureAuth
	I0803 23:18:11.759595   35217 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:18:11.759783   35217 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:18:11.759843   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.762680   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.763170   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.763196   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.763394   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.763580   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.763736   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.763896   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.764048   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:11.764248   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:11.764271   35217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:19:42.561598   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:19:42.561625   35217 machine.go:97] duration metric: took 1m31.674527918s to provisionDockerMachine
	I0803 23:19:42.561639   35217 start.go:293] postStartSetup for "ha-076508" (driver="kvm2")
	I0803 23:19:42.561652   35217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:19:42.561669   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.561998   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:19:42.562031   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.565091   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.565565   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.565586   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.565758   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.565959   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.566133   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.566285   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:19:42.652205   35217 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:19:42.656803   35217 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:19:42.656837   35217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:19:42.656906   35217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:19:42.656994   35217 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:19:42.657006   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:19:42.657108   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:19:42.666934   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:19:42.692079   35217 start.go:296] duration metric: took 130.427767ms for postStartSetup
	I0803 23:19:42.692120   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.692390   35217 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0803 23:19:42.692412   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.695019   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.695479   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.695505   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.695654   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.695831   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.696013   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.696165   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	W0803 23:19:42.780089   35217 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0803 23:19:42.780115   35217 fix.go:56] duration metric: took 1m31.915855312s for fixHost
	I0803 23:19:42.780140   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.782497   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.782787   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.782814   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.782972   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.783169   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.783332   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.783455   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.783627   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:19:42.783825   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:19:42.783840   35217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:19:42.894490   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722727182.857481475
	
	I0803 23:19:42.894522   35217 fix.go:216] guest clock: 1722727182.857481475
	I0803 23:19:42.894534   35217 fix.go:229] Guest: 2024-08-03 23:19:42.857481475 +0000 UTC Remote: 2024-08-03 23:19:42.780124002 +0000 UTC m=+92.043524146 (delta=77.357473ms)
	I0803 23:19:42.894561   35217 fix.go:200] guest clock delta is within tolerance: 77.357473ms
	I0803 23:19:42.894569   35217 start.go:83] releasing machines lock for "ha-076508", held for 1m32.0303221s
	I0803 23:19:42.894598   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.894861   35217 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:19:42.897697   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.898097   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.898120   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.898274   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.898775   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.898936   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.899002   35217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:19:42.899029   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.899150   35217 ssh_runner.go:195] Run: cat /version.json
	I0803 23:19:42.899170   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.901730   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.901972   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.902120   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.902158   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.902236   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.902370   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.902396   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.902417   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.902587   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.902602   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.902862   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.902882   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:19:42.903033   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.903142   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:19:43.002212   35217 ssh_runner.go:195] Run: systemctl --version
	I0803 23:19:43.008486   35217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:19:43.173022   35217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:19:43.179475   35217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:19:43.179553   35217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:19:43.189863   35217 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:19:43.189888   35217 start.go:495] detecting cgroup driver to use...
	I0803 23:19:43.189955   35217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:19:43.208212   35217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:19:43.222707   35217 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:19:43.222781   35217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:19:43.237429   35217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:19:43.251784   35217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:19:43.423755   35217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:19:43.587328   35217 docker.go:233] disabling docker service ...
	I0803 23:19:43.587408   35217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:19:43.607879   35217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:19:43.623456   35217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:19:43.782388   35217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:19:43.943805   35217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:19:43.959333   35217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:19:43.978184   35217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:19:43.978245   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:43.989450   35217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:19:43.989516   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.000640   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.012442   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.024747   35217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:19:44.036592   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.048277   35217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.059338   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.070557   35217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:19:44.080940   35217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:19:44.091032   35217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:19:44.252131   35217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:19:44.561108   35217 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:19:44.561180   35217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:19:44.566344   35217 start.go:563] Will wait 60s for crictl version
	I0803 23:19:44.566397   35217 ssh_runner.go:195] Run: which crictl
	I0803 23:19:44.570363   35217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:19:44.616230   35217 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:19:44.616330   35217 ssh_runner.go:195] Run: crio --version
	I0803 23:19:44.646596   35217 ssh_runner.go:195] Run: crio --version
	I0803 23:19:44.680323   35217 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:19:44.681842   35217 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:19:44.684311   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:44.684678   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:44.684704   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:44.684953   35217 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:19:44.689974   35217 kubeadm.go:883] updating cluster {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:19:44.690111   35217 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:19:44.690153   35217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:19:44.734528   35217 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:19:44.734550   35217 crio.go:433] Images already preloaded, skipping extraction
	I0803 23:19:44.734599   35217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:19:44.770239   35217 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:19:44.770261   35217 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:19:44.770269   35217 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.30.3 crio true true} ...
	I0803 23:19:44.770359   35217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:19:44.770423   35217 ssh_runner.go:195] Run: crio config
	I0803 23:19:44.822673   35217 cni.go:84] Creating CNI manager for ""
	I0803 23:19:44.822693   35217 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:19:44.822701   35217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:19:44.822726   35217 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076508 NodeName:ha-076508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:19:44.822854   35217 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:19:44.822871   35217 kube-vip.go:115] generating kube-vip config ...
	I0803 23:19:44.822909   35217 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:19:44.836452   35217 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:19:44.836554   35217 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:19:44.836606   35217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:19:44.846509   35217 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:19:44.846567   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:19:44.856915   35217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:19:44.874330   35217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:19:44.890840   35217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:19:44.907788   35217 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:19:44.924624   35217 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:19:44.929906   35217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:19:45.074540   35217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:19:45.091214   35217 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.154
	I0803 23:19:45.091237   35217 certs.go:194] generating shared ca certs ...
	I0803 23:19:45.091270   35217 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:19:45.091441   35217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:19:45.091498   35217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:19:45.091512   35217 certs.go:256] generating profile certs ...
	I0803 23:19:45.091639   35217 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:19:45.091677   35217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96
	I0803 23:19:45.091698   35217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.245 192.168.39.86 192.168.39.254]
	I0803 23:19:45.213772   35217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96 ...
	I0803 23:19:45.213812   35217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96: {Name:mk62f406486b5ed6ce4c1b2b0ee058997bac4493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:19:45.214021   35217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96 ...
	I0803 23:19:45.214039   35217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96: {Name:mk6f8077e49387fd70d50520c2c5ae7745e98a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:19:45.214146   35217 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:19:45.214318   35217 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
	I0803 23:19:45.214505   35217 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:19:45.214525   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:19:45.214547   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:19:45.214569   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:19:45.214592   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:19:45.214614   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:19:45.214635   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:19:45.214657   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:19:45.214678   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:19:45.214755   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:19:45.214809   35217 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:19:45.214825   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:19:45.214864   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:19:45.214910   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:19:45.214948   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:19:45.215022   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:19:45.215075   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.215094   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.215112   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.215645   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:19:45.241166   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:19:45.265456   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:19:45.290036   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:19:45.314751   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0803 23:19:45.339522   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:19:45.363709   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:19:45.387653   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:19:45.411283   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:19:45.435219   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:19:45.458731   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:19:45.481497   35217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:19:45.497479   35217 ssh_runner.go:195] Run: openssl version
	I0803 23:19:45.503285   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:19:45.514240   35217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.518552   35217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.518601   35217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.524198   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:19:45.533925   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:19:45.545219   35217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.549844   35217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.549896   35217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.564635   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:19:45.588592   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:19:45.601330   35217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.606146   35217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.606198   35217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.612037   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:19:45.621952   35217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:19:45.626570   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:19:45.632426   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:19:45.638599   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:19:45.644292   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:19:45.650254   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:19:45.656300   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0803 23:19:45.662436   35217 kubeadm.go:392] StartCluster: {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:19:45.662537   35217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:19:45.662595   35217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:19:45.701555   35217 cri.go:89] found id: "ea03444f92a5bf19e0ce17d312a9b2d8deb5e3ebeebf8c505971ca71a0293a00"
	I0803 23:19:45.701583   35217 cri.go:89] found id: "4e95648c5dc3a1f775ddfde37200b1596d49698951ae67d76ed6284717775639"
	I0803 23:19:45.701589   35217 cri.go:89] found id: "f4dd33ac454b5f75ca4d107721f23681f28c83681628e2f103cc43c6ddc11a9c"
	I0803 23:19:45.701596   35217 cri.go:89] found id: "e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac"
	I0803 23:19:45.701599   35217 cri.go:89] found id: "06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff"
	I0803 23:19:45.701602   35217 cri.go:89] found id: "6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d"
	I0803 23:19:45.701604   35217 cri.go:89] found id: "992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18"
	I0803 23:19:45.701607   35217 cri.go:89] found id: "c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa"
	I0803 23:19:45.701609   35217 cri.go:89] found id: "d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e"
	I0803 23:19:45.701614   35217 cri.go:89] found id: "1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8"
	I0803 23:19:45.701616   35217 cri.go:89] found id: "94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf"
	I0803 23:19:45.701619   35217 cri.go:89] found id: "4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a"
	I0803 23:19:45.701621   35217 cri.go:89] found id: "f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e"
	I0803 23:19:45.701624   35217 cri.go:89] found id: ""
	I0803 23:19:45.701664   35217 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.051062821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727350051037977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61cd55d6-86ff-4f4a-a92f-715fa1cd0995 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.051697659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ced6c8b-b49d-4f38-bfdd-d3ff9a113bd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.051778532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ced6c8b-b49d-4f38-bfdd-d3ff9a113bd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.052784083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ced6c8b-b49d-4f38-bfdd-d3ff9a113bd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.105467356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f96b0c61-e309-4dc3-a29d-ebe9f77e8e18 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.105571130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f96b0c61-e309-4dc3-a29d-ebe9f77e8e18 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.106887692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9ab9814-69f5-4ae2-b2b4-e41fe4a8ff59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.107714934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727350107688592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9ab9814-69f5-4ae2-b2b4-e41fe4a8ff59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.108146706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66e2cf23-8fb2-4ed3-93d1-a585665cec53 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.108219905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66e2cf23-8fb2-4ed3-93d1-a585665cec53 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.108789731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66e2cf23-8fb2-4ed3-93d1-a585665cec53 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.153227757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3854a17-3b13-4077-bbe9-133bf5ff848c name=/runtime.v1.RuntimeService/Version
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.153353031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3854a17-3b13-4077-bbe9-133bf5ff848c name=/runtime.v1.RuntimeService/Version
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.154560898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=233d662f-d309-4e53-a3e2-a1cce9f70e80 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.155015669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727350154991490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=233d662f-d309-4e53-a3e2-a1cce9f70e80 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.155755219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=446988ba-3dd6-40f8-b5a2-733b07daf655 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.155837706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=446988ba-3dd6-40f8-b5a2-733b07daf655 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.156423979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=446988ba-3dd6-40f8-b5a2-733b07daf655 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.204160840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3116acbe-73be-4df6-a4b4-0e4c79faf977 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.204338575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3116acbe-73be-4df6-a4b4-0e4c79faf977 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.205233306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80ac5d04-bffc-43dd-abc6-a3c530c02e45 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.205857446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727350205829213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80ac5d04-bffc-43dd-abc6-a3c530c02e45 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.206544055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d762a388-3cc9-4e78-86f6-ff1c0f45b863 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.206648006Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d762a388-3cc9-4e78-86f6-ff1c0f45b863 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:22:30 ha-076508 crio[3764]: time="2024-08-03 23:22:30.207108776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d762a388-3cc9-4e78-86f6-ff1c0f45b863 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	236598d6cc96d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   66a4a93f7c461       storage-provisioner
	47b13bb71b80b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   f5c7d8fa11931       kube-apiserver-ha-076508
	2bd4e4f161264       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   3891dc3f4b2ec       busybox-fc5497c4f-9mswn
	92be7ea582c57       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Running             kube-controller-manager   2                   6d66708bd1ffa       kube-controller-manager-ha-076508
	4dbe21946a24f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   66a4a93f7c461       storage-provisioner
	71f0576bac335       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   4b7530438acbe       kube-vip-ha-076508
	179245fb34464       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   43bff21da0840       kube-proxy-jvj96
	459da68d9d106       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   58242e7c3ff38       kindnet-bpdht
	54737b2cb99ed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   44ffc9bf5974c       coredns-7db6d8ff4d-g4nns
	2f205a672c44c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   23179463e06f3       kube-scheduler-ha-076508
	ee5dc55be20e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   dfebfcd977434       coredns-7db6d8ff4d-jm52b
	7ca821f617ef3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   6d66708bd1ffa       kube-controller-manager-ha-076508
	346ff16c76b0f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   f5c7d8fa11931       kube-apiserver-ha-076508
	7e7a230c984fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   aa2124fce0aed       etcd-ha-076508
	bf2cd88f9d490       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   5999015810d66       busybox-fc5497c4f-9mswn
	e4d2591ba7d5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   ce24a7aa66e68       coredns-7db6d8ff4d-g4nns
	06304cb4cc30c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   b802406e46b4c       coredns-7db6d8ff4d-jm52b
	992a3ac9b52e9       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    14 minutes ago       Exited              kindnet-cni               0                   f61ecf195fc7f       kindnet-bpdht
	c3100c43f706e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   9f02c76f5b54a       kube-proxy-jvj96
	94ea41effc5da       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      15 minutes ago       Exited              kube-scheduler            0                   893b2ee90e13f       kube-scheduler-ha-076508
	f127531f146d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   bf23341fb90df       etcd-ha-076508
	
	
	==> coredns [06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff] <==
	[INFO] 10.244.1.2:49197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125884s
	[INFO] 10.244.1.2:42019 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010825s
	[INFO] 10.244.1.2:36505 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000274487s
	[INFO] 10.244.0.4:53634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092514s
	[INFO] 10.244.0.4:37869 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148859s
	[INFO] 10.244.0.4:34409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007386s
	[INFO] 10.244.2.2:37127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00023425s
	[INFO] 10.244.1.2:45090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198771s
	[INFO] 10.244.1.2:35116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097607s
	[INFO] 10.244.0.4:54156 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000252361s
	[INFO] 10.244.0.4:56228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118127s
	[INFO] 10.244.2.2:40085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113887s
	[INFO] 10.244.2.2:41147 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160253s
	[INFO] 10.244.1.2:34773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224176s
	[INFO] 10.244.1.2:41590 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094468s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1996&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1027044414]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.664) (total time: 13277ms):
	Trace[1027044414]: ---"Objects listed" error:Unauthorized 13274ms (23:18:09.939)
	Trace[1027044414]: [13.277685662s] [13.277685662s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [54737b2cb99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57922->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57914->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1242704582]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:20:00.696) (total time: 13205ms):
	Trace[1242704582]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57914->10.96.0.1:443: read: connection reset by peer 13205ms (23:20:13.902)
	Trace[1242704582]: [13.205590381s] [13.205590381s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57914->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac] <==
	[INFO] 10.244.0.4:47543 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144888s
	[INFO] 10.244.2.2:48453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203019s
	[INFO] 10.244.2.2:47323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155497s
	[INFO] 10.244.1.2:55651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193064s
	[INFO] 10.244.1.2:54565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106172s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=2019&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=2001&timeout=6m8s&timeoutSeconds=368&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=2019&timeout=5m38s&timeoutSeconds=338&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1340984613]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.425) (total time: 13513ms):
	Trace[1340984613]: ---"Objects listed" error:Unauthorized 13513ms (23:18:09.938)
	Trace[1340984613]: [13.513458585s] [13.513458585s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1192520348]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.493) (total time: 13445ms):
	Trace[1192520348]: ---"Objects listed" error:Unauthorized 13445ms (23:18:09.938)
	Trace[1192520348]: [13.445317861s] [13.445317861s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[2110706742]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.396) (total time: 13545ms):
	Trace[2110706742]: ---"Objects listed" error:Unauthorized 13544ms (23:18:09.940)
	Trace[2110706742]: [13.545030185s] [13.545030185s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[4793203]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:19:53.823) (total time: 10001ms):
	Trace[4793203]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:20:03.825)
	Trace[4793203]: [10.001950976s] [10.001950976s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:43588->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:43588->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-076508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_07_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:22:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:20:32 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:20:32 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:20:32 +0000   Sat, 03 Aug 2024 23:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:20:32 +0000   Sat, 03 Aug 2024 23:08:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    ha-076508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f520408175b740ceb19f810f6b0739d9
	  System UUID:                f5204081-75b7-40ce-b19f-810f6b0739d9
	  Boot ID:                    1b5fc419-04f3-4085-a948-6aee54d39a0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9mswn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-g4nns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-jm52b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-076508                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-bpdht                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-076508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-076508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-jvj96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-076508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-076508                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age              From             Message
	  ----     ------                   ----             ----             -------
	  Normal   Starting                 14m              kube-proxy       
	  Normal   Starting                 118s             kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m              kubelet          Node ha-076508 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m              kubelet          Node ha-076508 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m              kubelet          Node ha-076508 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m              node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   NodeReady                14m              kubelet          Node ha-076508 status is now: NodeReady
	  Normal   RegisteredNode           12m              node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   RegisteredNode           11m              node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Warning  ContainerGCFailed        3m (x2 over 4m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           110s             node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   RegisteredNode           108s             node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   RegisteredNode           27s              node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	
	
	Name:               ha-076508-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_09_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:09:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:22:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-076508-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e37b92099f364fcfb7894de373a13dc0
	  System UUID:                e37b9209-9f36-4fcf-b789-4de373a13dc0
	  Boot ID:                    81900771-be6c-4a7e-92b3-1dcdfcd12a0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wlr2g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-076508-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-kw254                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-076508-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-076508-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hkfgl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-076508-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-076508-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-076508-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-076508-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-076508-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  NodeNotReady             9m13s                  node-controller  Node ha-076508-m02 status is now: NodeNotReady
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m24s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m24s)  kubelet          Node ha-076508-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x7 over 2m24s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s                   node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           108s                   node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	
	
	Name:               ha-076508-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_10_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:10:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:22:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:22:06 +0000   Sat, 03 Aug 2024 23:21:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:22:06 +0000   Sat, 03 Aug 2024 23:21:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:22:06 +0000   Sat, 03 Aug 2024 23:21:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:22:06 +0000   Sat, 03 Aug 2024 23:21:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.86
	  Hostname:    ha-076508-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0c4ebbd959429f966b637eb26caf62
	  System UUID:                ad0c4ebb-d959-429f-966b-637eb26caf62
	  Boot ID:                    7de73c58-7ff3-446c-acd0-f58f88ef314b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nfwfw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-076508-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-tzzq4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-076508-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-076508-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-7kmfh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-076508-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-076508-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 37s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-076508-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal   RegisteredNode           110s               node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal   RegisteredNode           108s               node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	  Normal   NodeNotReady             70s                node-controller  Node ha-076508-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  54s (x2 over 54s)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s (x2 over 54s)  kubelet          Node ha-076508-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s (x2 over 54s)  kubelet          Node ha-076508-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 54s                kubelet          Node ha-076508-m03 has been rebooted, boot id: 7de73c58-7ff3-446c-acd0-f58f88ef314b
	  Normal   NodeReady                54s                kubelet          Node ha-076508-m03 status is now: NodeReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-076508-m03 event: Registered Node ha-076508-m03 in Controller
	
	
	Name:               ha-076508-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_12_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:12:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:22:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:22:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:22:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:22:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:22:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-076508-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 59e0fe8296564277a8f997ffad0b72b7
	  System UUID:                59e0fe82-9656-4277-a8f9-97ffad0b72b7
	  Boot ID:                    ac1fd3ea-7219-4bf5-b0e7-785a8c9a8071
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hdkw5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-ff944    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-076508-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-076508-m04 status is now: NodeReady
	  Normal   RegisteredNode           110s               node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   RegisteredNode           108s               node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   NodeNotReady             70s                node-controller  Node ha-076508-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-076508-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-076508-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-076508-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-076508-m04 has been rebooted, boot id: ac1fd3ea-7219-4bf5-b0e7-785a8c9a8071
	  Normal   NodeReady                8s                 kubelet          Node ha-076508-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.547215] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056174] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.182365] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.110609] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.279600] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.413542] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.061522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.061905] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +1.335796] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.036158] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.075573] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.924842] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.636926] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 3 23:09] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 3 23:19] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.164245] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +0.196396] systemd-fstab-generator[3708]: Ignoring "noauto" option for root device
	[  +0.157714] systemd-fstab-generator[3720]: Ignoring "noauto" option for root device
	[  +0.306575] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[  +0.825945] systemd-fstab-generator[3850]: Ignoring "noauto" option for root device
	[  +3.462450] kauditd_printk_skb: 130 callbacks suppressed
	[Aug 3 23:20] kauditd_printk_skb: 78 callbacks suppressed
	[ +23.809708] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5] <==
	{"level":"warn","ts":"2024-08-03T23:21:30.624454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:21:30.626056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:21:30.659575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"10fb7b0a157fc334","from":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:21:34.290934Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.86:2380/version","remote-member-id":"6c6e355cb97cea1a","error":"Get \"https://192.168.39.86:2380/version\": dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:34.291051Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c6e355cb97cea1a","error":"Get \"https://192.168.39.86:2380/version\": dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:34.803924Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:34.804028Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:38.294085Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.86:2380/version","remote-member-id":"6c6e355cb97cea1a","error":"Get \"https://192.168.39.86:2380/version\": dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:38.294454Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c6e355cb97cea1a","error":"Get \"https://192.168.39.86:2380/version\": dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:39.804065Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:39.804174Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:42.297001Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.86:2380/version","remote-member-id":"6c6e355cb97cea1a","error":"Get \"https://192.168.39.86:2380/version\": dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:42.297164Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6c6e355cb97cea1a","error":"Get \"https://192.168.39.86:2380/version\": dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:44.415736Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.912212ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14066027079368719785 > lease_revoke:<id:2179911a8b4e0881>","response":"size:28"}
	{"level":"warn","ts":"2024-08-03T23:21:44.804443Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-03T23:21:44.804554Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-03T23:21:45.956573Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:21:45.956645Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:21:45.956792Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:21:45.971918Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"10fb7b0a157fc334","to":"6c6e355cb97cea1a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-03T23:21:45.97197Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:21:45.977904Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"10fb7b0a157fc334","to":"6c6e355cb97cea1a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-03T23:21:45.977968Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:21:49.805348Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-03T23:21:51.678483Z","caller":"traceutil/trace.go:171","msg":"trace[876846100] transaction","detail":"{read_only:false; response_revision:2552; number_of_response:1; }","duration":"138.46573ms","start":"2024-08-03T23:21:51.539972Z","end":"2024-08-03T23:21:51.678438Z","steps":["trace[876846100] 'process raft request'  (duration: 138.148425ms)"],"step_count":1}
	
	
	==> etcd [f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e] <==
	{"level":"warn","ts":"2024-08-03T23:18:11.919609Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T23:18:11.538207Z","time spent":"381.388563ms","remote":"127.0.0.1:48170","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	2024/08/03 23:18:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:18:11.923948Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14066027079178105826,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-03T23:18:12.18516Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3cc90d899860a179","rtt":"1.25816ms","error":"dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-03T23:18:12.187502Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:18:12.187547Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-03T23:18:12.189055Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"10fb7b0a157fc334","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-03T23:18:12.189252Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3cc90d899860a179"}
	{"level":"warn","ts":"2024-08-03T23:18:12.195325Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3cc90d899860a179","rtt":"12.208499ms","error":"dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"info","ts":"2024-08-03T23:18:12.196381Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196449Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196544Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196601Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196635Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196644Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.19665Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196682Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196721Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196819Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196895Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196962Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196995Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.200633Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-08-03T23:18:12.200756Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-08-03T23:18:12.200783Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-076508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	
	
	==> kernel <==
	 23:22:30 up 15 min,  0 users,  load average: 1.43, 0.77, 0.39
	Linux ha-076508 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb] <==
	I0803 23:22:00.298089       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:22:10.288999       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:22:10.289057       1 main.go:299] handling current node
	I0803 23:22:10.289092       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:22:10.289100       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:22:10.289379       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:22:10.289434       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:22:10.289522       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:22:10.289552       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:22:20.296856       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:22:20.296934       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:22:20.297115       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:22:20.297123       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:22:20.297214       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:22:20.297238       1 main.go:299] handling current node
	I0803 23:22:20.297258       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:22:20.297264       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:22:30.296365       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:22:30.296410       1 main.go:299] handling current node
	I0803 23:22:30.296423       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:22:30.296441       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:22:30.296565       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:22:30.296571       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:22:30.296612       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:22:30.296643       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18] <==
	I0803 23:17:51.270446       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:17:51.270714       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:17:51.272182       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:17:51.272234       1 main.go:299] handling current node
	I0803 23:17:51.272266       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:17:51.272325       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:17:51.272416       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:17:51.272440       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:18:01.271130       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:18:01.271403       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:18:01.271683       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:18:01.271716       1 main.go:299] handling current node
	I0803 23:18:01.271756       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:18:01.271773       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:18:01.271835       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:18:01.271854       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	E0803 23:18:09.942061       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0803 23:18:11.270506       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:18:11.270563       1 main.go:299] handling current node
	I0803 23:18:11.270579       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:18:11.270588       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:18:11.270735       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:18:11.270763       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:18:11.270820       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:18:11.270845       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d] <==
	I0803 23:19:49.232935       1 options.go:221] external host was not specified, using 192.168.39.154
	I0803 23:19:49.234418       1 server.go:148] Version: v1.30.3
	I0803 23:19:49.234543       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:19:50.054345       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0803 23:19:50.056966       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:19:50.062456       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0803 23:19:50.062565       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0803 23:19:50.063659       1 instance.go:299] Using reconciler: lease
	W0803 23:20:10.051982       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0803 23:20:10.053073       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0803 23:20:10.065245       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0803 23:20:10.066083       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a] <==
	I0803 23:20:29.962437       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0803 23:20:29.963268       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0803 23:20:29.963420       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0803 23:20:30.020849       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:20:30.020887       1 policy_source.go:224] refreshing policies
	I0803 23:20:30.040355       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 23:20:30.045050       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0803 23:20:30.051751       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0803 23:20:30.051927       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0803 23:20:30.052239       1 shared_informer.go:320] Caches are synced for configmaps
	I0803 23:20:30.052395       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0803 23:20:30.053216       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0803 23:20:30.053338       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0803 23:20:30.063890       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0803 23:20:30.064014       1 aggregator.go:165] initial CRD sync complete...
	I0803 23:20:30.064057       1 autoregister_controller.go:141] Starting autoregister controller
	I0803 23:20:30.064065       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0803 23:20:30.064072       1 cache.go:39] Caches are synced for autoregister controller
	W0803 23:20:30.067253       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.245]
	I0803 23:20:30.069185       1 controller.go:615] quota admission added evaluator for: endpoints
	I0803 23:20:30.093935       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0803 23:20:30.098111       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0803 23:20:30.099742       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0803 23:20:30.953728       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0803 23:20:31.424449       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.154 192.168.39.245]
	
	
	==> kube-controller-manager [7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39] <==
	I0803 23:19:50.496661       1 serving.go:380] Generated self-signed cert in-memory
	I0803 23:19:50.759536       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0803 23:19:50.759579       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:19:50.761142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0803 23:19:50.761862       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0803 23:19:50.762001       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0803 23:19:50.762094       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0803 23:20:11.072022       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.154:8443/healthz\": dial tcp 192.168.39.154:8443: connect: connection refused"
	
	
	==> kube-controller-manager [92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf] <==
	I0803 23:20:42.582027       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0803 23:20:42.594734       1 shared_informer.go:320] Caches are synced for persistent volume
	I0803 23:20:42.597952       1 shared_informer.go:320] Caches are synced for attach detach
	I0803 23:20:42.615642       1 shared_informer.go:320] Caches are synced for PV protection
	I0803 23:20:42.676470       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0803 23:20:42.689320       1 shared_informer.go:320] Caches are synced for endpoint
	I0803 23:20:42.730663       1 shared_informer.go:320] Caches are synced for resource quota
	I0803 23:20:42.746585       1 shared_informer.go:320] Caches are synced for resource quota
	I0803 23:20:43.155820       1 shared_informer.go:320] Caches are synced for garbage collector
	I0803 23:20:43.155943       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0803 23:20:43.177134       1 shared_informer.go:320] Caches are synced for garbage collector
	I0803 23:20:51.812922       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-79nmw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-79nmw\": the object has been modified; please apply your changes to the latest version and try again"
	I0803 23:20:51.814474       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"64336eba-dc9f-4608-9026-92a954c040e5", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-79nmw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-79nmw": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:20:51.837810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.164495ms"
	I0803 23:20:51.838266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="195.715µs"
	I0803 23:21:01.797077       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-79nmw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-79nmw\": the object has been modified; please apply your changes to the latest version and try again"
	I0803 23:21:01.797998       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"64336eba-dc9f-4608-9026-92a954c040e5", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-79nmw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-79nmw": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:21:01.817776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.529014ms"
	I0803 23:21:01.817894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.333µs"
	I0803 23:21:20.719856       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.300129ms"
	I0803 23:21:20.720066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.413µs"
	I0803 23:21:36.979028       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.548µs"
	I0803 23:21:56.245387       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.780907ms"
	I0803 23:21:56.245627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.82µs"
	I0803 23:22:22.221026       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076508-m04"
	
	
	==> kube-proxy [179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4] <==
	I0803 23:19:50.828334       1 server_linux.go:69] "Using iptables proxy"
	E0803 23:19:51.694913       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:19:54.765994       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:19:57.839246       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:20:03.981894       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:20:13.199142       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:20:31.629710       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0803 23:20:31.630014       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0803 23:20:31.707586       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:20:31.707796       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:20:31.707838       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:20:31.712996       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:20:31.713768       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:20:31.714103       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:20:31.718215       1 config.go:192] "Starting service config controller"
	I0803 23:20:31.718393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:20:31.718495       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:20:31.718518       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:20:31.720528       1 config.go:319] "Starting node config controller"
	I0803 23:20:31.720635       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:20:31.820437       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:20:31.820517       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:20:31.828381       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa] <==
	E0803 23:16:56.333682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:16:56.333937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:16:56.334017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:16:56.334504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:16:56.334600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:02.733698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:02.734194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:02.733946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:02.734412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:02.734017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:02.734649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:11.951656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:11.951877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:15.022782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:15.022974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:15.023085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:15.023120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:30.383237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:30.383423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:36.525911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:36.525986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:39.598436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:39.598501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:18:04.174801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:18:04.175717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d] <==
	W0803 23:20:25.723325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.154:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:25.723474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.154:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.269714       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.269851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.447641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.154:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.447825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.154:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.758504       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.154:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.758618       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.154:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.928266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.154:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.928423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.154:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:27.355087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:27.355247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:30.015729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:20:30.015963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:20:30.016157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:20:30.016196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:20:30.016327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:20:30.016381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:20:30.016477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:20:30.016508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:20:30.016591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:20:30.016657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:20:30.016784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0803 23:20:30.016817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0803 23:20:46.881494       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf] <==
	W0803 23:18:04.503185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:18:04.503379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 23:18:04.548847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:04.548968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.030395       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:18:05.030489       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:18:05.037811       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:18:05.037861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:18:05.041103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:05.041193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.194557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:18:05.194610       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:18:05.399732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:05.399821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.488214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:05.488307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.863523       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:18:05.863645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:18:10.280529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:18:10.280559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:18:11.289196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:18:11.289373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:18:11.763728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:18:11.763757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:18:11.889862       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 03 23:20:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:20:30 ha-076508 kubelet[1368]: I0803 23:20:30.893887    1368 scope.go:117] "RemoveContainer" containerID="f4dd33ac454b5f75ca4d107721f23681f28c83681628e2f103cc43c6ddc11a9c"
	Aug 03 23:20:31 ha-076508 kubelet[1368]: W0803 23:20:31.629683    1368 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1947": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 03 23:20:31 ha-076508 kubelet[1368]: E0803 23:20:31.629780    1368 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1947": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 03 23:20:31 ha-076508 kubelet[1368]: I0803 23:20:31.629859    1368 status_manager.go:853] "Failed to get status for pod" podUID="a8200b39f80bd8260f39151e31b90485" pod="kube-system/etcd-ha-076508" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 03 23:20:31 ha-076508 kubelet[1368]: I0803 23:20:31.798524    1368 scope.go:117] "RemoveContainer" containerID="4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5"
	Aug 03 23:20:31 ha-076508 kubelet[1368]: E0803 23:20:31.798730    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c98f9062-eff5-48e1-b260-7e8acf8df124)\"" pod="kube-system/storage-provisioner" podUID="c98f9062-eff5-48e1-b260-7e8acf8df124"
	Aug 03 23:20:43 ha-076508 kubelet[1368]: I0803 23:20:43.798800    1368 scope.go:117] "RemoveContainer" containerID="4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5"
	Aug 03 23:20:43 ha-076508 kubelet[1368]: E0803 23:20:43.800071    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c98f9062-eff5-48e1-b260-7e8acf8df124)\"" pod="kube-system/storage-provisioner" podUID="c98f9062-eff5-48e1-b260-7e8acf8df124"
	Aug 03 23:20:45 ha-076508 kubelet[1368]: I0803 23:20:45.729028    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-9mswn" podStartSLOduration=559.441930158 podStartE2EDuration="9m21.728989618s" podCreationTimestamp="2024-08-03 23:11:24 +0000 UTC" firstStartedPulling="2024-08-03 23:11:25.333008464 +0000 UTC m=+234.665515193" lastFinishedPulling="2024-08-03 23:11:27.620067928 +0000 UTC m=+236.952574653" observedRunningTime="2024-08-03 23:11:27.806991277 +0000 UTC m=+237.139498024" watchObservedRunningTime="2024-08-03 23:20:45.728989618 +0000 UTC m=+795.061496350"
	Aug 03 23:20:57 ha-076508 kubelet[1368]: I0803 23:20:57.798776    1368 scope.go:117] "RemoveContainer" containerID="4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5"
	Aug 03 23:20:57 ha-076508 kubelet[1368]: E0803 23:20:57.799042    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c98f9062-eff5-48e1-b260-7e8acf8df124)\"" pod="kube-system/storage-provisioner" podUID="c98f9062-eff5-48e1-b260-7e8acf8df124"
	Aug 03 23:21:09 ha-076508 kubelet[1368]: I0803 23:21:09.799402    1368 scope.go:117] "RemoveContainer" containerID="4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5"
	Aug 03 23:21:21 ha-076508 kubelet[1368]: I0803 23:21:21.798488    1368 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-076508" podUID="f0640d14-d8df-4fe5-8265-4f1215c2e881"
	Aug 03 23:21:21 ha-076508 kubelet[1368]: I0803 23:21:21.818116    1368 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-076508"
	Aug 03 23:21:30 ha-076508 kubelet[1368]: E0803 23:21:30.839477    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:21:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:21:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:21:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:21:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:22:30 ha-076508 kubelet[1368]: E0803 23:22:30.842597    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:22:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:22:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:22:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:22:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0803 23:22:29.705059   36650 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-9607/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076508 -n ha-076508
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 stop -v=7 --alsologtostderr
E0803 23:23:27.616380   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 stop -v=7 --alsologtostderr: exit status 82 (2m0.467142616s)

                                                
                                                
-- stdout --
	* Stopping node "ha-076508-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:22:49.526815   37063 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:22:49.526925   37063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:22:49.526935   37063 out.go:304] Setting ErrFile to fd 2...
	I0803 23:22:49.526939   37063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:22:49.527127   37063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:22:49.527387   37063 out.go:298] Setting JSON to false
	I0803 23:22:49.527457   37063 mustload.go:65] Loading cluster: ha-076508
	I0803 23:22:49.527830   37063 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:22:49.527913   37063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:22:49.528088   37063 mustload.go:65] Loading cluster: ha-076508
	I0803 23:22:49.528215   37063 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:22:49.528247   37063 stop.go:39] StopHost: ha-076508-m04
	I0803 23:22:49.528719   37063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:22:49.528765   37063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:22:49.543481   37063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42627
	I0803 23:22:49.543983   37063 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:22:49.544556   37063 main.go:141] libmachine: Using API Version  1
	I0803 23:22:49.544607   37063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:22:49.544991   37063 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:22:49.547441   37063 out.go:177] * Stopping node "ha-076508-m04"  ...
	I0803 23:22:49.549064   37063 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0803 23:22:49.549099   37063 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:22:49.549298   37063 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0803 23:22:49.549323   37063 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:22:49.552134   37063 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:22:49.552552   37063 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:22:16 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:22:49.552583   37063 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:22:49.552763   37063 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:22:49.552946   37063 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:22:49.553104   37063 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:22:49.553239   37063 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	I0803 23:22:49.641407   37063 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0803 23:22:49.695113   37063 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0803 23:22:49.747642   37063 main.go:141] libmachine: Stopping "ha-076508-m04"...
	I0803 23:22:49.747665   37063 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:22:49.749160   37063 main.go:141] libmachine: (ha-076508-m04) Calling .Stop
	I0803 23:22:49.752617   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 0/120
	I0803 23:22:50.754082   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 1/120
	I0803 23:22:51.755517   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 2/120
	I0803 23:22:52.757169   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 3/120
	I0803 23:22:53.758823   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 4/120
	I0803 23:22:54.760446   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 5/120
	I0803 23:22:55.761887   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 6/120
	I0803 23:22:56.763907   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 7/120
	I0803 23:22:57.765805   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 8/120
	I0803 23:22:58.768025   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 9/120
	I0803 23:22:59.769400   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 10/120
	I0803 23:23:00.770844   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 11/120
	I0803 23:23:01.772742   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 12/120
	I0803 23:23:02.774114   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 13/120
	I0803 23:23:03.775438   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 14/120
	I0803 23:23:04.777620   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 15/120
	I0803 23:23:05.779146   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 16/120
	I0803 23:23:06.780773   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 17/120
	I0803 23:23:07.782019   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 18/120
	I0803 23:23:08.783365   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 19/120
	I0803 23:23:09.785469   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 20/120
	I0803 23:23:10.787823   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 21/120
	I0803 23:23:11.789222   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 22/120
	I0803 23:23:12.790516   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 23/120
	I0803 23:23:13.792561   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 24/120
	I0803 23:23:14.794514   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 25/120
	I0803 23:23:15.796065   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 26/120
	I0803 23:23:16.797956   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 27/120
	I0803 23:23:17.799942   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 28/120
	I0803 23:23:18.801133   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 29/120
	I0803 23:23:19.802871   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 30/120
	I0803 23:23:20.804195   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 31/120
	I0803 23:23:21.805694   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 32/120
	I0803 23:23:22.807804   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 33/120
	I0803 23:23:23.809334   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 34/120
	I0803 23:23:24.811298   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 35/120
	I0803 23:23:25.812598   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 36/120
	I0803 23:23:26.814390   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 37/120
	I0803 23:23:27.815932   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 38/120
	I0803 23:23:28.817302   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 39/120
	I0803 23:23:29.819442   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 40/120
	I0803 23:23:30.820709   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 41/120
	I0803 23:23:31.821931   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 42/120
	I0803 23:23:32.823263   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 43/120
	I0803 23:23:33.824677   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 44/120
	I0803 23:23:34.826035   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 45/120
	I0803 23:23:35.828461   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 46/120
	I0803 23:23:36.829843   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 47/120
	I0803 23:23:37.831763   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 48/120
	I0803 23:23:38.833382   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 49/120
	I0803 23:23:39.835070   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 50/120
	I0803 23:23:40.836809   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 51/120
	I0803 23:23:41.838053   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 52/120
	I0803 23:23:42.840061   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 53/120
	I0803 23:23:43.841306   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 54/120
	I0803 23:23:44.842797   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 55/120
	I0803 23:23:45.844429   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 56/120
	I0803 23:23:46.845730   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 57/120
	I0803 23:23:47.848024   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 58/120
	I0803 23:23:48.849504   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 59/120
	I0803 23:23:49.851536   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 60/120
	I0803 23:23:50.852888   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 61/120
	I0803 23:23:51.854414   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 62/120
	I0803 23:23:52.856611   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 63/120
	I0803 23:23:53.858120   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 64/120
	I0803 23:23:54.859889   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 65/120
	I0803 23:23:55.861286   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 66/120
	I0803 23:23:56.862688   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 67/120
	I0803 23:23:57.864025   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 68/120
	I0803 23:23:58.865300   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 69/120
	I0803 23:23:59.867406   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 70/120
	I0803 23:24:00.868895   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 71/120
	I0803 23:24:01.870824   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 72/120
	I0803 23:24:02.872065   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 73/120
	I0803 23:24:03.873597   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 74/120
	I0803 23:24:04.875150   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 75/120
	I0803 23:24:05.876426   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 76/120
	I0803 23:24:06.877923   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 77/120
	I0803 23:24:07.879819   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 78/120
	I0803 23:24:08.881550   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 79/120
	I0803 23:24:09.883355   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 80/120
	I0803 23:24:10.884599   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 81/120
	I0803 23:24:11.886096   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 82/120
	I0803 23:24:12.887383   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 83/120
	I0803 23:24:13.888675   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 84/120
	I0803 23:24:14.890492   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 85/120
	I0803 23:24:15.892354   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 86/120
	I0803 23:24:16.894085   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 87/120
	I0803 23:24:17.895821   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 88/120
	I0803 23:24:18.897070   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 89/120
	I0803 23:24:19.899175   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 90/120
	I0803 23:24:20.900516   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 91/120
	I0803 23:24:21.901905   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 92/120
	I0803 23:24:22.903890   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 93/120
	I0803 23:24:23.905388   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 94/120
	I0803 23:24:24.907250   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 95/120
	I0803 23:24:25.908782   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 96/120
	I0803 23:24:26.910789   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 97/120
	I0803 23:24:27.912020   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 98/120
	I0803 23:24:28.913343   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 99/120
	I0803 23:24:29.915504   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 100/120
	I0803 23:24:30.916645   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 101/120
	I0803 23:24:31.918023   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 102/120
	I0803 23:24:32.919396   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 103/120
	I0803 23:24:33.920583   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 104/120
	I0803 23:24:34.922408   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 105/120
	I0803 23:24:35.923594   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 106/120
	I0803 23:24:36.925095   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 107/120
	I0803 23:24:37.926633   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 108/120
	I0803 23:24:38.928563   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 109/120
	I0803 23:24:39.930512   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 110/120
	I0803 23:24:40.932451   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 111/120
	I0803 23:24:41.934570   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 112/120
	I0803 23:24:42.936046   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 113/120
	I0803 23:24:43.937326   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 114/120
	I0803 23:24:44.939320   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 115/120
	I0803 23:24:45.940575   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 116/120
	I0803 23:24:46.942001   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 117/120
	I0803 23:24:47.943252   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 118/120
	I0803 23:24:48.944599   37063 main.go:141] libmachine: (ha-076508-m04) Waiting for machine to stop 119/120
	I0803 23:24:49.945730   37063 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0803 23:24:49.945785   37063 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0803 23:24:49.947681   37063 out.go:177] 
	W0803 23:24:49.948907   37063 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0803 23:24:49.948924   37063 out.go:239] * 
	* 
	W0803 23:24:49.951072   37063 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 23:24:49.952265   37063 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-076508 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr: exit status 3 (19.075857178s)

                                                
                                                
-- stdout --
	ha-076508
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076508-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:24:49.995863   37494 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:24:49.996114   37494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:24:49.996122   37494 out.go:304] Setting ErrFile to fd 2...
	I0803 23:24:49.996127   37494 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:24:49.996287   37494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:24:49.996435   37494 out.go:298] Setting JSON to false
	I0803 23:24:49.996459   37494 mustload.go:65] Loading cluster: ha-076508
	I0803 23:24:49.996564   37494 notify.go:220] Checking for updates...
	I0803 23:24:49.996904   37494 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:24:49.996926   37494 status.go:255] checking status of ha-076508 ...
	I0803 23:24:49.997414   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:49.997489   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.016419   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0803 23:24:50.016818   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.017442   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.017471   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.017795   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.017983   37494 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:24:50.019687   37494 status.go:330] ha-076508 host status = "Running" (err=<nil>)
	I0803 23:24:50.019713   37494 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:24:50.020130   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.020170   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.035534   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I0803 23:24:50.035965   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.036392   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.036411   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.036733   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.036945   37494 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:24:50.039595   37494 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:24:50.039969   37494 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:24:50.039992   37494 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:24:50.040095   37494 host.go:66] Checking if "ha-076508" exists ...
	I0803 23:24:50.040381   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.040418   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.054909   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0803 23:24:50.055295   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.055751   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.055771   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.056068   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.056328   37494 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:24:50.056508   37494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:24:50.056540   37494 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:24:50.059466   37494 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:24:50.059873   37494 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:24:50.059902   37494 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:24:50.060034   37494 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:24:50.060205   37494 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:24:50.060349   37494 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:24:50.060485   37494 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:24:50.152193   37494 ssh_runner.go:195] Run: systemctl --version
	I0803 23:24:50.159366   37494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:24:50.179663   37494 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:24:50.179690   37494 api_server.go:166] Checking apiserver status ...
	I0803 23:24:50.179720   37494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:24:50.196714   37494 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5077/cgroup
	W0803 23:24:50.207590   37494 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5077/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:24:50.207657   37494 ssh_runner.go:195] Run: ls
	I0803 23:24:50.212379   37494 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:24:50.216783   37494 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:24:50.216811   37494 status.go:422] ha-076508 apiserver status = Running (err=<nil>)
	I0803 23:24:50.216820   37494 status.go:257] ha-076508 status: &{Name:ha-076508 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:24:50.216835   37494 status.go:255] checking status of ha-076508-m02 ...
	I0803 23:24:50.217127   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.217165   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.231702   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0803 23:24:50.232148   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.232637   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.232656   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.232946   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.233100   37494 main.go:141] libmachine: (ha-076508-m02) Calling .GetState
	I0803 23:24:50.234608   37494 status.go:330] ha-076508-m02 host status = "Running" (err=<nil>)
	I0803 23:24:50.234623   37494 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:24:50.234903   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.234938   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.250071   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0803 23:24:50.250518   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.251006   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.251025   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.251299   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.251464   37494 main.go:141] libmachine: (ha-076508-m02) Calling .GetIP
	I0803 23:24:50.254299   37494 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:24:50.254732   37494 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:19:57 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:24:50.254759   37494 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:24:50.254918   37494 host.go:66] Checking if "ha-076508-m02" exists ...
	I0803 23:24:50.255209   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.255261   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.271216   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34173
	I0803 23:24:50.271555   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.272046   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.272069   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.272383   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.272617   37494 main.go:141] libmachine: (ha-076508-m02) Calling .DriverName
	I0803 23:24:50.272783   37494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:24:50.272804   37494 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHHostname
	I0803 23:24:50.275989   37494 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:24:50.276463   37494 main.go:141] libmachine: (ha-076508-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:c8:3b", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:19:57 +0000 UTC Type:0 Mac:52:54:00:d6:c8:3b Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-076508-m02 Clientid:01:52:54:00:d6:c8:3b}
	I0803 23:24:50.276490   37494 main.go:141] libmachine: (ha-076508-m02) DBG | domain ha-076508-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:d6:c8:3b in network mk-ha-076508
	I0803 23:24:50.276662   37494 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHPort
	I0803 23:24:50.276827   37494 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHKeyPath
	I0803 23:24:50.276970   37494 main.go:141] libmachine: (ha-076508-m02) Calling .GetSSHUsername
	I0803 23:24:50.277112   37494 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m02/id_rsa Username:docker}
	I0803 23:24:50.358290   37494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:24:50.374694   37494 kubeconfig.go:125] found "ha-076508" server: "https://192.168.39.254:8443"
	I0803 23:24:50.374722   37494 api_server.go:166] Checking apiserver status ...
	I0803 23:24:50.374762   37494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:24:50.389858   37494 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0803 23:24:50.404501   37494 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:24:50.404562   37494 ssh_runner.go:195] Run: ls
	I0803 23:24:50.411115   37494 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:24:50.415429   37494 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:24:50.415452   37494 status.go:422] ha-076508-m02 apiserver status = Running (err=<nil>)
	I0803 23:24:50.415460   37494 status.go:257] ha-076508-m02 status: &{Name:ha-076508-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:24:50.415475   37494 status.go:255] checking status of ha-076508-m04 ...
	I0803 23:24:50.415779   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.415817   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.430682   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
	I0803 23:24:50.431151   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.431652   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.431667   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.431992   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.432162   37494 main.go:141] libmachine: (ha-076508-m04) Calling .GetState
	I0803 23:24:50.433730   37494 status.go:330] ha-076508-m04 host status = "Running" (err=<nil>)
	I0803 23:24:50.433748   37494 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:24:50.434062   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.434097   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.449996   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43589
	I0803 23:24:50.450387   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.450899   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.450929   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.451268   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.451486   37494 main.go:141] libmachine: (ha-076508-m04) Calling .GetIP
	I0803 23:24:50.454440   37494 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:24:50.454861   37494 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:22:16 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:24:50.454890   37494 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:24:50.455000   37494 host.go:66] Checking if "ha-076508-m04" exists ...
	I0803 23:24:50.455285   37494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:24:50.455328   37494 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:24:50.470602   37494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42267
	I0803 23:24:50.471031   37494 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:24:50.471453   37494 main.go:141] libmachine: Using API Version  1
	I0803 23:24:50.471472   37494 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:24:50.471799   37494 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:24:50.471996   37494 main.go:141] libmachine: (ha-076508-m04) Calling .DriverName
	I0803 23:24:50.472192   37494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:24:50.472214   37494 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHHostname
	I0803 23:24:50.474979   37494 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:24:50.475392   37494 main.go:141] libmachine: (ha-076508-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:1b:f6", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:22:16 +0000 UTC Type:0 Mac:52:54:00:5a:1b:f6 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-076508-m04 Clientid:01:52:54:00:5a:1b:f6}
	I0803 23:24:50.475418   37494 main.go:141] libmachine: (ha-076508-m04) DBG | domain ha-076508-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:5a:1b:f6 in network mk-ha-076508
	I0803 23:24:50.475557   37494 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHPort
	I0803 23:24:50.475716   37494 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHKeyPath
	I0803 23:24:50.475884   37494 main.go:141] libmachine: (ha-076508-m04) Calling .GetSSHUsername
	I0803 23:24:50.476045   37494 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa Username:docker}
	W0803 23:25:09.029592   37494 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0803 23:25:09.029671   37494 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0803 23:25:09.029685   37494 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0803 23:25:09.029692   37494 status.go:257] ha-076508-m04 status: &{Name:ha-076508-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:25:09.029709   37494 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr" : exit status 3
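For reference, the per-node probes recorded in the stderr log above can be reproduced by hand. The snippet below is a minimal shell sketch, not part of the test harness: the IP addresses, SSH key paths, and the "docker" user are taken from the log (ha-076508 at 192.168.39.154, ha-076508-m04 at 192.168.39.121, the HA apiserver VIP at 192.168.39.254:8443), and each command mirrors one that ssh_runner logged.

    # Probe the primary control-plane node the same way the status command does.
    KEY=/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa
    ssh -i "$KEY" docker@192.168.39.154 "df -h /var | awk 'NR==2{print \$5}'"            # /var usage
    ssh -i "$KEY" docker@192.168.39.154 "sudo systemctl is-active --quiet service kubelet" \
      && echo "kubelet: running"
    APISERVER_PID=$(ssh -i "$KEY" docker@192.168.39.154 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'")
    # The freezer-cgroup lookup fails in the log above (exit status 1); the status
    # command then falls back to the apiserver healthz endpoint on the VIP, which the
    # log shows returning 200 "ok".
    ssh -i "$KEY" docker@192.168.39.154 "sudo egrep '^[0-9]+:freezer:' /proc/${APISERVER_PID}/cgroup" \
      || curl -sk https://192.168.39.254:8443/healthz

    # The failure itself: ha-076508-m04 is unreachable over SSH ("no route to host"),
    # so its host state is reported as Error and the status command exits with status 3.
    ssh -o ConnectTimeout=10 \
      -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508-m04/id_rsa \
      docker@192.168.39.121 true || echo "ha-076508-m04: no route to host"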
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-076508 -n ha-076508
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-076508 logs -n 25: (1.783307775s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m04 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp testdata/cp-test.txt                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508:/home/docker/cp-test_ha-076508-m04_ha-076508.txt                      |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508 sudo cat                                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508.txt                                |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m02:/home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m02 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m03:/home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n                                                                | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | ha-076508-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-076508 ssh -n ha-076508-m03 sudo cat                                         | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC | 03 Aug 24 23:12 UTC |
	|         | /home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-076508 node stop m02 -v=7                                                    | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:12 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-076508 node start m02 -v=7                                                   | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:15 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-076508 -v=7                                                          | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-076508 -v=7                                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:16 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-076508 --wait=true -v=7                                                   | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:18 UTC | 03 Aug 24 23:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-076508                                                               | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:22 UTC |                     |
	| node    | ha-076508 node delete m03 -v=7                                                  | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:22 UTC | 03 Aug 24 23:22 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-076508 stop -v=7                                                             | ha-076508 | jenkins | v1.33.1 | 03 Aug 24 23:22 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:18:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:18:10.772185   35217 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:18:10.772415   35217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:18:10.772423   35217 out.go:304] Setting ErrFile to fd 2...
	I0803 23:18:10.772427   35217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:18:10.772611   35217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:18:10.773151   35217 out.go:298] Setting JSON to false
	I0803 23:18:10.774126   35217 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3635,"bootTime":1722723456,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:18:10.774187   35217 start.go:139] virtualization: kvm guest
	I0803 23:18:10.779445   35217 out.go:177] * [ha-076508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:18:10.780986   35217 notify.go:220] Checking for updates...
	I0803 23:18:10.781036   35217 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:18:10.782487   35217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:18:10.783900   35217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:18:10.784978   35217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:18:10.786219   35217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:18:10.787608   35217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:18:10.789142   35217 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:18:10.789226   35217 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:18:10.789708   35217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:18:10.789766   35217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:18:10.804359   35217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I0803 23:18:10.804867   35217 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:18:10.805479   35217 main.go:141] libmachine: Using API Version  1
	I0803 23:18:10.805501   35217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:18:10.805803   35217 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:18:10.805996   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:18:10.841624   35217 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:18:10.842852   35217 start.go:297] selected driver: kvm2
	I0803 23:18:10.842864   35217 start.go:901] validating driver "kvm2" against &{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:18:10.842990   35217 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:18:10.843305   35217 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:18:10.843374   35217 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:18:10.859348   35217 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:18:10.860095   35217 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:18:10.860156   35217 cni.go:84] Creating CNI manager for ""
	I0803 23:18:10.860169   35217 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:18:10.860231   35217 start.go:340] cluster config:
	{Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:18:10.860343   35217 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:18:10.862440   35217 out.go:177] * Starting "ha-076508" primary control-plane node in "ha-076508" cluster
	I0803 23:18:10.863723   35217 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:18:10.863767   35217 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:18:10.863775   35217 cache.go:56] Caching tarball of preloaded images
	I0803 23:18:10.863880   35217 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:18:10.863891   35217 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:18:10.864004   35217 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/config.json ...
	I0803 23:18:10.864195   35217 start.go:360] acquireMachinesLock for ha-076508: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:18:10.864238   35217 start.go:364] duration metric: took 23.093µs to acquireMachinesLock for "ha-076508"
	I0803 23:18:10.864252   35217 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:18:10.864259   35217 fix.go:54] fixHost starting: 
	I0803 23:18:10.864534   35217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:18:10.864562   35217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:18:10.880151   35217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39793
	I0803 23:18:10.880560   35217 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:18:10.881082   35217 main.go:141] libmachine: Using API Version  1
	I0803 23:18:10.881113   35217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:18:10.881474   35217 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:18:10.881694   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:18:10.881861   35217 main.go:141] libmachine: (ha-076508) Calling .GetState
	I0803 23:18:10.883590   35217 fix.go:112] recreateIfNeeded on ha-076508: state=Running err=<nil>
	W0803 23:18:10.883608   35217 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:18:10.885653   35217 out.go:177] * Updating the running kvm2 "ha-076508" VM ...
	I0803 23:18:10.887082   35217 machine.go:94] provisionDockerMachine start ...
	I0803 23:18:10.887104   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:18:10.887314   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:10.889840   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:10.890295   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:10.890322   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:10.890467   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:10.890643   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:10.890815   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:10.890956   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:10.891125   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:10.891302   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:10.891313   35217 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:18:11.006636   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508
	
	I0803 23:18:11.006673   35217 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:18:11.006948   35217 buildroot.go:166] provisioning hostname "ha-076508"
	I0803 23:18:11.006974   35217 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:18:11.007237   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.009895   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.010267   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.010294   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.010514   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.010705   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.010871   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.011000   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.011173   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:11.011388   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:11.011406   35217 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-076508 && echo "ha-076508" | sudo tee /etc/hostname
	I0803 23:18:11.147566   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-076508
	
	I0803 23:18:11.147596   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.150388   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.150710   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.150754   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.150879   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.151081   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.151201   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.151347   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.151498   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:11.151693   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:11.151713   35217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-076508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-076508/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-076508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:18:11.266305   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:18:11.266334   35217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:18:11.266355   35217 buildroot.go:174] setting up certificates
	I0803 23:18:11.266386   35217 provision.go:84] configureAuth start
	I0803 23:18:11.266407   35217 main.go:141] libmachine: (ha-076508) Calling .GetMachineName
	I0803 23:18:11.266688   35217 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:18:11.269464   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.269868   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.269913   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.270068   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.272260   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.272625   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.272647   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.272788   35217 provision.go:143] copyHostCerts
	I0803 23:18:11.272814   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:18:11.272842   35217 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:18:11.272849   35217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:18:11.272913   35217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:18:11.273004   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:18:11.273022   35217 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:18:11.273029   35217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:18:11.273052   35217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:18:11.273152   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:18:11.273170   35217 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:18:11.273174   35217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:18:11.273203   35217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:18:11.273256   35217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.ha-076508 san=[127.0.0.1 192.168.39.154 ha-076508 localhost minikube]
	I0803 23:18:11.566185   35217 provision.go:177] copyRemoteCerts
	I0803 23:18:11.566260   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:18:11.566281   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.569701   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.570079   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.570107   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.570342   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.570577   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.570853   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.571022   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:18:11.657388   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:18:11.657483   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:18:11.689100   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:18:11.689177   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0803 23:18:11.732018   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:18:11.732089   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:18:11.759569   35217 provision.go:87] duration metric: took 493.171292ms to configureAuth
	I0803 23:18:11.759595   35217 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:18:11.759783   35217 config.go:182] Loaded profile config "ha-076508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:18:11.759843   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:18:11.762680   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.763170   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:18:11.763196   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:18:11.763394   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:18:11.763580   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.763736   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:18:11.763896   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:18:11.764048   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:18:11.764248   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:18:11.764271   35217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:19:42.561598   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:19:42.561625   35217 machine.go:97] duration metric: took 1m31.674527918s to provisionDockerMachine
	I0803 23:19:42.561639   35217 start.go:293] postStartSetup for "ha-076508" (driver="kvm2")
	I0803 23:19:42.561652   35217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:19:42.561669   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.561998   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:19:42.562031   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.565091   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.565565   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.565586   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.565758   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.565959   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.566133   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.566285   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:19:42.652205   35217 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:19:42.656803   35217 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:19:42.656837   35217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:19:42.656906   35217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:19:42.656994   35217 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:19:42.657006   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:19:42.657108   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:19:42.666934   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:19:42.692079   35217 start.go:296] duration metric: took 130.427767ms for postStartSetup
	I0803 23:19:42.692120   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.692390   35217 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0803 23:19:42.692412   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.695019   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.695479   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.695505   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.695654   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.695831   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.696013   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.696165   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	W0803 23:19:42.780089   35217 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0803 23:19:42.780115   35217 fix.go:56] duration metric: took 1m31.915855312s for fixHost
	I0803 23:19:42.780140   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.782497   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.782787   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.782814   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.782972   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.783169   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.783332   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.783455   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.783627   35217 main.go:141] libmachine: Using SSH client type: native
	I0803 23:19:42.783825   35217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I0803 23:19:42.783840   35217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:19:42.894490   35217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722727182.857481475
	
	I0803 23:19:42.894522   35217 fix.go:216] guest clock: 1722727182.857481475
	I0803 23:19:42.894534   35217 fix.go:229] Guest: 2024-08-03 23:19:42.857481475 +0000 UTC Remote: 2024-08-03 23:19:42.780124002 +0000 UTC m=+92.043524146 (delta=77.357473ms)
	I0803 23:19:42.894561   35217 fix.go:200] guest clock delta is within tolerance: 77.357473ms
	I0803 23:19:42.894569   35217 start.go:83] releasing machines lock for "ha-076508", held for 1m32.0303221s
	I0803 23:19:42.894598   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.894861   35217 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:19:42.897697   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.898097   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.898120   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.898274   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.898775   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.898936   35217 main.go:141] libmachine: (ha-076508) Calling .DriverName
	I0803 23:19:42.899002   35217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:19:42.899029   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.899150   35217 ssh_runner.go:195] Run: cat /version.json
	I0803 23:19:42.899170   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHHostname
	I0803 23:19:42.901730   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.901972   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.902120   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.902158   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.902236   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.902370   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:42.902396   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:42.902417   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.902587   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.902602   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHPort
	I0803 23:19:42.902862   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHKeyPath
	I0803 23:19:42.902882   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:19:42.903033   35217 main.go:141] libmachine: (ha-076508) Calling .GetSSHUsername
	I0803 23:19:42.903142   35217 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/ha-076508/id_rsa Username:docker}
	I0803 23:19:43.002212   35217 ssh_runner.go:195] Run: systemctl --version
	I0803 23:19:43.008486   35217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:19:43.173022   35217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:19:43.179475   35217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:19:43.179553   35217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:19:43.189863   35217 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:19:43.189888   35217 start.go:495] detecting cgroup driver to use...
	I0803 23:19:43.189955   35217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:19:43.208212   35217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:19:43.222707   35217 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:19:43.222781   35217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:19:43.237429   35217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:19:43.251784   35217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:19:43.423755   35217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:19:43.587328   35217 docker.go:233] disabling docker service ...
	I0803 23:19:43.587408   35217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:19:43.607879   35217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:19:43.623456   35217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:19:43.782388   35217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:19:43.943805   35217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:19:43.959333   35217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:19:43.978184   35217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:19:43.978245   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:43.989450   35217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:19:43.989516   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.000640   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.012442   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.024747   35217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:19:44.036592   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.048277   35217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.059338   35217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:19:44.070557   35217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:19:44.080940   35217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:19:44.091032   35217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:19:44.252131   35217 ssh_runner.go:195] Run: sudo systemctl restart crio
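	The sed/grep pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before this restart; if every edit applied cleanly, the file should carry roughly the keys below (a sketch of the expected result, not output captured from this run):
	# expected shape of /etc/crio/crio.conf.d/02-crio.conf after the edits above (illustrative)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",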
	I0803 23:19:44.561108   35217 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:19:44.561180   35217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:19:44.566344   35217 start.go:563] Will wait 60s for crictl version
	I0803 23:19:44.566397   35217 ssh_runner.go:195] Run: which crictl
	I0803 23:19:44.570363   35217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:19:44.616230   35217 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:19:44.616330   35217 ssh_runner.go:195] Run: crio --version
	I0803 23:19:44.646596   35217 ssh_runner.go:195] Run: crio --version
	I0803 23:19:44.680323   35217 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:19:44.681842   35217 main.go:141] libmachine: (ha-076508) Calling .GetIP
	I0803 23:19:44.684311   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:44.684678   35217 main.go:141] libmachine: (ha-076508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:c7:ad", ip: ""} in network mk-ha-076508: {Iface:virbr1 ExpiryTime:2024-08-04 00:07:02 +0000 UTC Type:0 Mac:52:54:00:04:c7:ad Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-076508 Clientid:01:52:54:00:04:c7:ad}
	I0803 23:19:44.684704   35217 main.go:141] libmachine: (ha-076508) DBG | domain ha-076508 has defined IP address 192.168.39.154 and MAC address 52:54:00:04:c7:ad in network mk-ha-076508
	I0803 23:19:44.684953   35217 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:19:44.689974   35217 kubeadm.go:883] updating cluster {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:19:44.690111   35217 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:19:44.690153   35217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:19:44.734528   35217 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:19:44.734550   35217 crio.go:433] Images already preloaded, skipping extraction
	I0803 23:19:44.734599   35217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:19:44.770239   35217 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:19:44.770261   35217 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:19:44.770269   35217 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.30.3 crio true true} ...
	I0803 23:19:44.770359   35217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-076508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
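	The unit fragment above is what gets rendered into the kubelet systemd drop-in; once the files land on the node it can be inspected by hand (a sketch of a manual verification, not output from this run):
	# show the kubelet unit together with the generated drop-in (ExecStart flags above)
	sudo systemctl cat kubelet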
	I0803 23:19:44.770423   35217 ssh_runner.go:195] Run: crio config
	I0803 23:19:44.822673   35217 cni.go:84] Creating CNI manager for ""
	I0803 23:19:44.822693   35217 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:19:44.822701   35217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:19:44.822726   35217 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-076508 NodeName:ha-076508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:19:44.822854   35217 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-076508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
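	The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new; assuming the v1.30.3 kubeadm binary under /var/lib/minikube/binaries supports the `config validate` subcommand, it can be linted by hand like this (a sketch, not part of this run):
	# validate the generated config with the same kubeadm binary the cluster uses
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new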
	
	I0803 23:19:44.822871   35217 kube-vip.go:115] generating kube-vip config ...
	I0803 23:19:44.822909   35217 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:19:44.836452   35217 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:19:44.836554   35217 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:19:44.836606   35217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:19:44.846509   35217 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:19:44.846567   35217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:19:44.856915   35217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:19:44.874330   35217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:19:44.890840   35217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:19:44.907788   35217 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:19:44.924624   35217 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:19:44.929906   35217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:19:45.074540   35217 ssh_runner.go:195] Run: sudo systemctl start kubelet
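	With the kube-vip manifest from above placed in /etc/kubernetes/manifests and kubelet restarted, the elected leader should bind the control-plane VIP 192.168.39.254 on eth0; a quick manual check (illustrative, not captured here) would be:
	# confirm the kube-vip static pod is up and the VIP from the manifest is bound on eth0
	sudo crictl ps --name kube-vip
	ip addr show eth0 | grep 192.168.39.254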
	I0803 23:19:45.091214   35217 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508 for IP: 192.168.39.154
	I0803 23:19:45.091237   35217 certs.go:194] generating shared ca certs ...
	I0803 23:19:45.091270   35217 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:19:45.091441   35217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:19:45.091498   35217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:19:45.091512   35217 certs.go:256] generating profile certs ...
	I0803 23:19:45.091639   35217 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/client.key
	I0803 23:19:45.091677   35217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96
	I0803 23:19:45.091698   35217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.154 192.168.39.245 192.168.39.86 192.168.39.254]
	I0803 23:19:45.213772   35217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96 ...
	I0803 23:19:45.213812   35217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96: {Name:mk62f406486b5ed6ce4c1b2b0ee058997bac4493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:19:45.214021   35217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96 ...
	I0803 23:19:45.214039   35217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96: {Name:mk6f8077e49387fd70d50520c2c5ae7745e98a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:19:45.214146   35217 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt.86072f96 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt
	I0803 23:19:45.214318   35217 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key.86072f96 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key
	I0803 23:19:45.214505   35217 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key
	I0803 23:19:45.214525   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:19:45.214547   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:19:45.214569   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:19:45.214592   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:19:45.214614   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:19:45.214635   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:19:45.214657   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:19:45.214678   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:19:45.214755   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:19:45.214809   35217 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:19:45.214825   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:19:45.214864   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:19:45.214910   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:19:45.214948   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:19:45.215022   35217 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:19:45.215075   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.215094   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.215112   35217 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.215645   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:19:45.241166   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:19:45.265456   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:19:45.290036   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:19:45.314751   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0803 23:19:45.339522   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:19:45.363709   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:19:45.387653   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/ha-076508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:19:45.411283   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:19:45.435219   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:19:45.458731   35217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:19:45.481497   35217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:19:45.497479   35217 ssh_runner.go:195] Run: openssl version
	I0803 23:19:45.503285   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:19:45.514240   35217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.518552   35217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.518601   35217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:19:45.524198   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:19:45.533925   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:19:45.545219   35217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.549844   35217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.549896   35217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:19:45.564635   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:19:45.588592   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:19:45.601330   35217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.606146   35217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.606198   35217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:19:45.612037   35217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:19:45.621952   35217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:19:45.626570   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:19:45.632426   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:19:45.638599   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:19:45.644292   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:19:45.650254   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:19:45.656300   35217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
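	Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours) and exits non-zero if it does; the underlying expiry can be printed directly (illustrative manual check):
	# print the expiry date that the -checkend 86400 checks above are testing against
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt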
	I0803 23:19:45.662436   35217 kubeadm.go:392] StartCluster: {Name:ha-076508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-076508 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:19:45.662537   35217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:19:45.662595   35217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:19:45.701555   35217 cri.go:89] found id: "ea03444f92a5bf19e0ce17d312a9b2d8deb5e3ebeebf8c505971ca71a0293a00"
	I0803 23:19:45.701583   35217 cri.go:89] found id: "4e95648c5dc3a1f775ddfde37200b1596d49698951ae67d76ed6284717775639"
	I0803 23:19:45.701589   35217 cri.go:89] found id: "f4dd33ac454b5f75ca4d107721f23681f28c83681628e2f103cc43c6ddc11a9c"
	I0803 23:19:45.701596   35217 cri.go:89] found id: "e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac"
	I0803 23:19:45.701599   35217 cri.go:89] found id: "06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff"
	I0803 23:19:45.701602   35217 cri.go:89] found id: "6f7c5e8e3bdac4eb3896e0799c1baf348b250f64611d70ada7c8a6b0877f753d"
	I0803 23:19:45.701604   35217 cri.go:89] found id: "992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18"
	I0803 23:19:45.701607   35217 cri.go:89] found id: "c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa"
	I0803 23:19:45.701609   35217 cri.go:89] found id: "d05a03627874a6aa94e9d20285c30c669224806570e94d22c65230790534d31e"
	I0803 23:19:45.701614   35217 cri.go:89] found id: "1e30a0cbac1a3da7ed38331ca2526d5cafbc4ff40bee964ec813430db11385c8"
	I0803 23:19:45.701616   35217 cri.go:89] found id: "94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf"
	I0803 23:19:45.701619   35217 cri.go:89] found id: "4ce5fe2a1f3aa87481b9047cabaec03e59115e5d7d9845b8f6b4e6fa66d7531a"
	I0803 23:19:45.701621   35217 cri.go:89] found id: "f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e"
	I0803 23:19:45.701624   35217 cri.go:89] found id: ""
	I0803 23:19:45.701664   35217 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.680951778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727509680928064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9308d907-2f77-4c5a-a220-fe0c1d664c47 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.681699084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ebb9dc5-2a9a-4a8e-985d-9722e207a7ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.681755387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ebb9dc5-2a9a-4a8e-985d-9722e207a7ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.682169689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ebb9dc5-2a9a-4a8e-985d-9722e207a7ac name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.727689490Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62cc6790-2826-4973-aeb3-6edbaf060a20 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.727780907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62cc6790-2826-4973-aeb3-6edbaf060a20 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.729154895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a61d0e1c-9038-4014-8c2a-754ec14e0898 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.729767718Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727509729740819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a61d0e1c-9038-4014-8c2a-754ec14e0898 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.730654352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e05b31c2-45fe-4f9c-8a46-7f1d02911fb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.730715208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e05b31c2-45fe-4f9c-8a46-7f1d02911fb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.731517256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e05b31c2-45fe-4f9c-8a46-7f1d02911fb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.783835960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f8f340e-9a45-47be-a8a2-ae451efb13e2 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.783929965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f8f340e-9a45-47be-a8a2-ae451efb13e2 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.785416302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=846120be-9fff-4b1e-99c4-6ac9060b5e40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.785845972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727509785824567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=846120be-9fff-4b1e-99c4-6ac9060b5e40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.786725641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d39891e-3969-4807-9d91-55fb087a7884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.786783822Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d39891e-3969-4807-9d91-55fb087a7884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.787186077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d39891e-3969-4807-9d91-55fb087a7884 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.832506123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=feb813f5-2eb7-4237-8fad-da030f7a929a name=/runtime.v1.RuntimeService/Version
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.832584581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=feb813f5-2eb7-4237-8fad-da030f7a929a name=/runtime.v1.RuntimeService/Version
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.834140765Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d94f179-a929-4011-abae-862eccc36b78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.834728194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722727509834697573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d94f179-a929-4011-abae-862eccc36b78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.835378893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5490397-6cc7-4487-bed5-9211d21b0b49 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.835436961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5490397-6cc7-4487-bed5-9211d21b0b49 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:25:09 ha-076508 crio[3764]: time="2024-08-03 23:25:09.835937193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:236598d6cc96dca69a60e6e32b5a453847fed9637c85dbd11fa1ba2bf7321383,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722727269817023811,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722727227813873785,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Annotations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd4e4f16126465ca737c857024d1f2db0f944028f31bd5747553f31370754a5,PodSandboxId:3891dc3f4b2ecd6dd910ebe20063442b5ecaae292c6066aa28aa97f8624efadc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722727222509498628,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annotations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722727221153849954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbe21946a24fb9d2eb30e783aea6aba40e68331502b615b798d46e971d967e5,PodSandboxId:66a4a93f7c461457c0e8d49dda7bc0d17142e491f03330f61c7be9916a2a71c2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722727218814230179,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98f9062-eff5-48e1-b260-7e8acf8df124,},Annotations:map[string]string{io.kubernetes.container.hash: 4f8b9750,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f0576bac335d0d6fe7b02ce30db9a7eed82d914dc45483c8f4261404c0e118,PodSandboxId:4b7530438acbe8725d62763fcef37f254f2c22767d27370ccb3b4b15a8a44300,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722727204111974477,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f78d994ab8633ef1f7eaa15b0ba3802,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4,PodSandboxId:43bff21da0840e8a0ef6378b793a0d658406db540b7b44bcbce29e52bbc0c830,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722727189710703926,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb,PodSandboxId:58242e7c3ff38a06d03350b2a9d89ff7e2ea60d999927711dc88e52aa88cae94,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722727189160084660,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54737b2c
b99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be,PodSandboxId:44ffc9bf5974ce395088947cafb2c0e05c29e0d8795e375f06e717cc7cc97b23,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188913714794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kubernetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d,PodSandboxId:23179463e06f350d7c559077eb7b964a580949cec04f2da6e216478bc852d01e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722727188769686198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25,PodSandboxId:dfebfcd977434e1b8efa4f6bf3ec9fc83663f2536b7441bd22fa7e927c3a1c58,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722727188768783949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39,PodSandboxId:6d66708bd1ffa12b54d9abe4b5fcc8d0943eeaaae7092ccd63a7d708b2b82116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722727188680603436,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-076508,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 5cfc7ccbbd8869f463d6c9d7f25c7b69,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5,PodSandboxId:aa2124fce0aededed80d6e619c0beb0e3107cbbc02944b90594947dee8ebf590,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722727188491753876,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd826
0f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d,PodSandboxId:f5c7d8fa119317912c2ec03e2dfdee38bb54d47f0aa8d5fd9b01fbb65b1d739c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722727188546599853,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03b5e048885e5fea318d5f49c66398f7,},Ann
otations:map[string]string{io.kubernetes.container.hash: aeaac023,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf2cd88f9d490aca44d2fe1495a26c55842e4ba75e118c772a813cd26a87d533,PodSandboxId:5999015810d6658882e005eeddcd5d0b8fe87d1e4424769ed4baeb8aaaaff492,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722726687649229521,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9mswn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bb1d5016-7a80-440d-8d04-9c51a1c84199,},Annot
ations:map[string]string{io.kubernetes.container.hash: 29f12e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac,PodSandboxId:ce24a7aa66e68461adb08cd502adc885c6b36544cc7c4ddab43d138cda86c9cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482042590388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4nns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c9c7190-c993-4b51-8ba6-62e3ab513836,},Annotations:map[string]string{io.kube
rnetes.container.hash: 150501f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff,PodSandboxId:b802406e46b4c07b6ad9078199d60382382cd03301e22286848f9c70693cb76b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722726482019751290,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jm52b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65abad67-6b05-4dbb-8d33-723306bee46f,},Annotations:map[string]string{io.kubernetes.container.hash: 87664c7c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18,PodSandboxId:f61ecf195fc7f868958c5a86d3ca806691c6821c59f1afc3b171192839830203,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722726470071314319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bpdht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 156017b0-941c-4b32-a73c-4798d48e5434,},Annotations:map[string]string{io.kubernetes.container.hash: 7b25940b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa,PodSandboxId:9f02c76f5b54ab18e7b8c75f26d0c756277edf4afe744b7e30de47e4034d033e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722726464614760494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jvj96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb6273b-31a8-48bc-8c5a-010363fc2a96,},Annotations:map[string]string{io.kubernetes.container.hash: ace577d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf,PodSandboxId:893b2ee90e13fe6298fdb223e5c351b5b83a6b0bd497faf852647b8e444061cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722726444757482817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb662281698a59578ac55a71345bbdf9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e,PodSandboxId:bf23341fb90dfbb23de40998a0663d7dc3a3614d5110341e2e73b4cac65f2bbb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722726444586088932,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-076508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8200b39f80bd8260f39151e31b90485,},Annotations:map[string]string{io.kubernetes.container.hash: addda88f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5490397-6cc7-4487-bed5-9211d21b0b49 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	236598d6cc96d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   66a4a93f7c461       storage-provisioner
	47b13bb71b80b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   f5c7d8fa11931       kube-apiserver-ha-076508
	2bd4e4f161264       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   3891dc3f4b2ec       busybox-fc5497c4f-9mswn
	92be7ea582c57       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   6d66708bd1ffa       kube-controller-manager-ha-076508
	4dbe21946a24f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   66a4a93f7c461       storage-provisioner
	71f0576bac335       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   4b7530438acbe       kube-vip-ha-076508
	179245fb34464       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   43bff21da0840       kube-proxy-jvj96
	459da68d9d106       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   58242e7c3ff38       kindnet-bpdht
	54737b2cb99ed       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   44ffc9bf5974c       coredns-7db6d8ff4d-g4nns
	2f205a672c44c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   23179463e06f3       kube-scheduler-ha-076508
	ee5dc55be20e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   dfebfcd977434       coredns-7db6d8ff4d-jm52b
	7ca821f617ef3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   6d66708bd1ffa       kube-controller-manager-ha-076508
	346ff16c76b0f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   f5c7d8fa11931       kube-apiserver-ha-076508
	7e7a230c984fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   aa2124fce0aed       etcd-ha-076508
	bf2cd88f9d490       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   5999015810d66       busybox-fc5497c4f-9mswn
	e4d2591ba7d5b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   ce24a7aa66e68       coredns-7db6d8ff4d-g4nns
	06304cb4cc30c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   b802406e46b4c       coredns-7db6d8ff4d-jm52b
	992a3ac9b52e9       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    17 minutes ago      Exited              kindnet-cni               0                   f61ecf195fc7f       kindnet-bpdht
	c3100c43f706e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      17 minutes ago      Exited              kube-proxy                0                   9f02c76f5b54a       kube-proxy-jvj96
	94ea41effc5da       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   893b2ee90e13f       kube-scheduler-ha-076508
	f127531f146d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   bf23341fb90df       etcd-ha-076508
	
	
	==> coredns [06304cb4cc30c653017e857d8e74880110f812101a082c1c98e41527e7daaaff] <==
	[INFO] 10.244.1.2:49197 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125884s
	[INFO] 10.244.1.2:42019 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010825s
	[INFO] 10.244.1.2:36505 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000274487s
	[INFO] 10.244.0.4:53634 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092514s
	[INFO] 10.244.0.4:37869 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148859s
	[INFO] 10.244.0.4:34409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007386s
	[INFO] 10.244.2.2:37127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00023425s
	[INFO] 10.244.1.2:45090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198771s
	[INFO] 10.244.1.2:35116 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097607s
	[INFO] 10.244.0.4:54156 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000252361s
	[INFO] 10.244.0.4:56228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118127s
	[INFO] 10.244.2.2:40085 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113887s
	[INFO] 10.244.2.2:41147 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000160253s
	[INFO] 10.244.1.2:34773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000224176s
	[INFO] 10.244.1.2:41590 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094468s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1996&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1027044414]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.664) (total time: 13277ms):
	Trace[1027044414]: ---"Objects listed" error:Unauthorized 13274ms (23:18:09.939)
	Trace[1027044414]: [13.277685662s] [13.277685662s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [54737b2cb99edc800dbe76ceaa7788270800e736f77d26559d10602bc8e849be] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57922->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57914->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1242704582]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:20:00.696) (total time: 13205ms):
	Trace[1242704582]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57914->10.96.0.1:443: read: connection reset by peer 13205ms (23:20:13.902)
	Trace[1242704582]: [13.205590381s] [13.205590381s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:57914->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e4d2591ba7d5be0883fb4cb05d9db4b3eee744c4abea8c974c2b263d03e8f8ac] <==
	[INFO] 10.244.0.4:47543 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000144888s
	[INFO] 10.244.2.2:48453 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203019s
	[INFO] 10.244.2.2:47323 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155497s
	[INFO] 10.244.1.2:55651 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193064s
	[INFO] 10.244.1.2:54565 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106172s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=2019&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=2001&timeout=6m8s&timeoutSeconds=368&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=2019&timeout=5m38s&timeoutSeconds=338&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1340984613]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.425) (total time: 13513ms):
	Trace[1340984613]: ---"Objects listed" error:Unauthorized 13513ms (23:18:09.938)
	Trace[1340984613]: [13.513458585s] [13.513458585s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1192520348]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.493) (total time: 13445ms):
	Trace[1192520348]: ---"Objects listed" error:Unauthorized 13445ms (23:18:09.938)
	Trace[1192520348]: [13.445317861s] [13.445317861s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[2110706742]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:17:56.396) (total time: 13545ms):
	Trace[2110706742]: ---"Objects listed" error:Unauthorized 13544ms (23:18:09.940)
	Trace[2110706742]: [13.545030185s] [13.545030185s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee5dc55be20e5b1700b2ae2c3af514f1b13579ed1cf94c9a3730524eecec7f25] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[4793203]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:19:53.823) (total time: 10001ms):
	Trace[4793203]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:20:03.825)
	Trace[4793203]: [10.001950976s] [10.001950976s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:43588->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:43588->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-076508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_07_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:25:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:23:33 +0000   Sat, 03 Aug 2024 23:23:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:23:33 +0000   Sat, 03 Aug 2024 23:23:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:23:33 +0000   Sat, 03 Aug 2024 23:23:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:23:33 +0000   Sat, 03 Aug 2024 23:23:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    ha-076508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f520408175b740ceb19f810f6b0739d9
	  System UUID:                f5204081-75b7-40ce-b19f-810f6b0739d9
	  Boot ID:                    1b5fc419-04f3-4085-a948-6aee54d39a0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9mswn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-g4nns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-jm52b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-076508                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-bpdht                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-076508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-076508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-jvj96                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-076508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-076508                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 4m38s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                    node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Warning  ContainerGCFailed        5m40s (x2 over 6m40s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-076508 event: Registered Node ha-076508 in Controller
	  Normal   NodeNotReady             110s                   node-controller  Node ha-076508 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     97s (x2 over 17m)      kubelet          Node ha-076508 status is now: NodeHasSufficientPID
	  Normal   NodeReady                97s (x2 over 17m)      kubelet          Node ha-076508 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    97s (x2 over 17m)      kubelet          Node ha-076508 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  97s (x2 over 17m)      kubelet          Node ha-076508 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-076508-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_09_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:09:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:25:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:21:15 +0000   Sat, 03 Aug 2024 23:20:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-076508-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e37b92099f364fcfb7894de373a13dc0
	  System UUID:                e37b9209-9f36-4fcf-b789-4de373a13dc0
	  Boot ID:                    81900771-be6c-4a7e-92b3-1dcdfcd12a0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wlr2g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-076508-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-kw254                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-076508-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-076508-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hkfgl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-076508-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-076508-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m32s                kube-proxy       
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-076508-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-076508-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)    kubelet          Node ha-076508-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                  node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           13m                  node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  NodeNotReady             11m                  node-controller  Node ha-076508-m02 status is now: NodeNotReady
	  Normal  Starting                 5m4s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m4s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m4s)  kubelet          Node ha-076508-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m4s)  kubelet          Node ha-076508-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m30s                node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           4m28s                node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-076508-m02 event: Registered Node ha-076508-m02 in Controller
	
	
	Name:               ha-076508-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-076508-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=ha-076508
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_12_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:12:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-076508-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:22:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:23:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:23:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:23:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 03 Aug 2024 23:22:22 +0000   Sat, 03 Aug 2024 23:23:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-076508-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 59e0fe8296564277a8f997ffad0b72b7
	  System UUID:                59e0fe82-9656-4277-a8f9-97ffad0b72b7
	  Boot ID:                    ac1fd3ea-7219-4bf5-b0e7-785a8c9a8071
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kvpgn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-hdkw5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-ff944           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-076508-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-076508-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-076508-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-076508-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m30s                  node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   NodeNotReady             3m50s                  node-controller  Node ha-076508-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-076508-m04 event: Registered Node ha-076508-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-076508-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-076508-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-076508-m04 has been rebooted, boot id: ac1fd3ea-7219-4bf5-b0e7-785a8c9a8071
	  Normal   NodeReady                2m48s                  kubelet          Node ha-076508-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-076508-m04 status is now: NodeNotReady
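Node ha-076508-m04 carries node.kubernetes.io/unreachable taints and all of its conditions are Unknown, i.e. the kubelet stopped posting status again after the reboot. A quick re-check of the node view, assuming kubectl is pointed at the context minikube creates for this profile (ha-076508):

    kubectl --context ha-076508 get nodes -o wide
    kubectl --context ha-076508 describe node ha-076508-m04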
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.547215] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056174] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.182365] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.110609] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.279600] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.413542] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.061522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.061905] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +1.335796] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.036158] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.075573] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.924842] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.636926] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 3 23:09] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 3 23:19] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.164245] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +0.196396] systemd-fstab-generator[3708]: Ignoring "noauto" option for root device
	[  +0.157714] systemd-fstab-generator[3720]: Ignoring "noauto" option for root device
	[  +0.306575] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[  +0.825945] systemd-fstab-generator[3850]: Ignoring "noauto" option for root device
	[  +3.462450] kauditd_printk_skb: 130 callbacks suppressed
	[Aug 3 23:20] kauditd_printk_skb: 78 callbacks suppressed
	[ +23.809708] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [7e7a230c984fabf95ce556afbc95971e3553df1b1c36a0a64c2621a6690e94c5] <==
	{"level":"info","ts":"2024-08-03T23:21:45.956792Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:21:45.971918Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"10fb7b0a157fc334","to":"6c6e355cb97cea1a","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-03T23:21:45.97197Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:21:45.977904Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"10fb7b0a157fc334","to":"6c6e355cb97cea1a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-03T23:21:45.977968Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:21:49.805348Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6c6e355cb97cea1a","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-03T23:21:51.678483Z","caller":"traceutil/trace.go:171","msg":"trace[876846100] transaction","detail":"{read_only:false; response_revision:2552; number_of_response:1; }","duration":"138.46573ms","start":"2024-08-03T23:21:51.539972Z","end":"2024-08-03T23:21:51.678438Z","steps":["trace[876846100] 'process raft request'  (duration: 138.148425ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T23:22:35.96162Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.86:57744","server-name":"","error":"read tcp 192.168.39.154:2379->192.168.39.86:57744: read: connection reset by peer"}
	{"level":"info","ts":"2024-08-03T23:22:35.975785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 switched to configuration voters=(1223707007001805620 4380046997212668281)"}
	{"level":"info","ts":"2024-08-03T23:22:35.978244Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","removed-remote-peer-id":"6c6e355cb97cea1a","removed-remote-peer-urls":["https://192.168.39.86:2380"]}
	{"level":"info","ts":"2024-08-03T23:22:35.978476Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:22:35.979003Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:22:35.979144Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:22:35.979658Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:22:35.979762Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:22:35.980049Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:22:35.980377Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a","error":"context canceled"}
	{"level":"warn","ts":"2024-08-03T23:22:35.980433Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"6c6e355cb97cea1a","error":"failed to read 6c6e355cb97cea1a on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-03T23:22:35.980471Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:22:35.980666Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a","error":"context canceled"}
	{"level":"info","ts":"2024-08-03T23:22:35.980687Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:22:35.980698Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:22:35.980709Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"10fb7b0a157fc334","removed-remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:22:35.996556Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"10fb7b0a157fc334","remote-peer-id-stream-handler":"10fb7b0a157fc334","remote-peer-id-from":"6c6e355cb97cea1a"}
	{"level":"warn","ts":"2024-08-03T23:22:36.000452Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"10fb7b0a157fc334","remote-peer-id-stream-handler":"10fb7b0a157fc334","remote-peer-id-from":"6c6e355cb97cea1a"}
	
	
	==> etcd [f127531f146d9a09b43c94bbc6eb2088a57038da279f63e5742865665fe51d0e] <==
	{"level":"warn","ts":"2024-08-03T23:18:11.919609Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T23:18:11.538207Z","time spent":"381.388563ms","remote":"127.0.0.1:48170","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	2024/08/03 23:18:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:18:11.923948Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14066027079178105826,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-03T23:18:12.18516Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3cc90d899860a179","rtt":"1.25816ms","error":"dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-03T23:18:12.187502Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:18:12.187547Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-03T23:18:12.189055Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"10fb7b0a157fc334","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-03T23:18:12.189252Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3cc90d899860a179"}
	{"level":"warn","ts":"2024-08-03T23:18:12.195325Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3cc90d899860a179","rtt":"12.208499ms","error":"dial tcp 192.168.39.245:2380: connect: no route to host"}
	{"level":"info","ts":"2024-08-03T23:18:12.196381Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196449Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196544Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196601Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196635Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.196644Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3cc90d899860a179"}
	{"level":"info","ts":"2024-08-03T23:18:12.19665Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196682Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196721Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196819Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196895Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196962Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"10fb7b0a157fc334","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.196995Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6c6e355cb97cea1a"}
	{"level":"info","ts":"2024-08-03T23:18:12.200633Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-08-03T23:18:12.200756Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-08-03T23:18:12.200783Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-076508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	
	
	==> kernel <==
	 23:25:10 up 18 min,  0 users,  load average: 0.38, 0.63, 0.40
	Linux ha-076508 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [459da68d9d106b172622a35c2e958b255d2dc9debadad23018344c60967166eb] <==
	I0803 23:24:30.297534       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:24:40.298351       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:24:40.298432       1 main.go:299] handling current node
	I0803 23:24:40.298451       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:24:40.298457       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:24:40.298604       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:24:40.298628       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:24:50.289362       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:24:50.289476       1 main.go:299] handling current node
	I0803 23:24:50.289507       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:24:50.289526       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:24:50.289747       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:24:50.289784       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:25:00.293945       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:25:00.294372       1 main.go:299] handling current node
	I0803 23:25:00.294417       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:25:00.294441       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:25:00.294666       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:25:00.294697       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:25:10.290441       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:25:10.290486       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:25:10.290726       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:25:10.290775       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:25:10.290869       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:25:10.291350       1 main.go:299] handling current node
	
	
	==> kindnet [992a3ac9b52e9fa1f233b5b8b13e7264e2b2843d01e0df6cf8d32f75dd390a18] <==
	I0803 23:17:51.270446       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:17:51.270714       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:17:51.272182       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:17:51.272234       1 main.go:299] handling current node
	I0803 23:17:51.272266       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:17:51.272325       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:17:51.272416       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:17:51.272440       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:18:01.271130       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:18:01.271403       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	I0803 23:18:01.271683       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:18:01.271716       1 main.go:299] handling current node
	I0803 23:18:01.271756       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:18:01.271773       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:18:01.271835       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:18:01.271854       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	E0803 23:18:09.942061       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0803 23:18:11.270506       1 main.go:295] Handling node with IPs: map[192.168.39.154:{}]
	I0803 23:18:11.270563       1 main.go:299] handling current node
	I0803 23:18:11.270579       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0803 23:18:11.270588       1 main.go:322] Node ha-076508-m02 has CIDR [10.244.1.0/24] 
	I0803 23:18:11.270735       1 main.go:295] Handling node with IPs: map[192.168.39.86:{}]
	I0803 23:18:11.270763       1 main.go:322] Node ha-076508-m03 has CIDR [10.244.2.0/24] 
	I0803 23:18:11.270820       1 main.go:295] Handling node with IPs: map[192.168.39.121:{}]
	I0803 23:18:11.270845       1 main.go:322] Node ha-076508-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [346ff16c76b0f978ffba85eed0176fd7cc1a61a7f8d1d5a66106d6c40a78bd2d] <==
	I0803 23:19:49.232935       1 options.go:221] external host was not specified, using 192.168.39.154
	I0803 23:19:49.234418       1 server.go:148] Version: v1.30.3
	I0803 23:19:49.234543       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:19:50.054345       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0803 23:19:50.056966       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:19:50.062456       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0803 23:19:50.062565       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0803 23:19:50.063659       1 instance.go:299] Using reconciler: lease
	W0803 23:20:10.051982       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0803 23:20:10.053073       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0803 23:20:10.065245       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0803 23:20:10.066083       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
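This apiserver instance exited because it could not reach etcd on 127.0.0.1:2379 before the storage-factory deadline. One hedged way to check whether the local etcd container was up at that point (crictl runs inside the minikube VM; the profile name is carried over from the logs above):

    minikube -p ha-076508 ssh -- sudo crictl ps -a --name etcd
    minikube -p ha-076508 ssh -- sudo crictl logs --tail 50 <etcd-container-id>   # substitute the ID from the first command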
	
	
	==> kube-apiserver [47b13bb71b80bf46047ef88f46757564800a7a2535aa5079fa9784ca4ac3429a] <==
	I0803 23:20:29.962437       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0803 23:20:29.963268       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0803 23:20:29.963420       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0803 23:20:30.020849       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:20:30.020887       1 policy_source.go:224] refreshing policies
	I0803 23:20:30.040355       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 23:20:30.045050       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0803 23:20:30.051751       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0803 23:20:30.051927       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0803 23:20:30.052239       1 shared_informer.go:320] Caches are synced for configmaps
	I0803 23:20:30.052395       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0803 23:20:30.053216       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0803 23:20:30.053338       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0803 23:20:30.063890       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0803 23:20:30.064014       1 aggregator.go:165] initial CRD sync complete...
	I0803 23:20:30.064057       1 autoregister_controller.go:141] Starting autoregister controller
	I0803 23:20:30.064065       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0803 23:20:30.064072       1 cache.go:39] Caches are synced for autoregister controller
	W0803 23:20:30.067253       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.245]
	I0803 23:20:30.069185       1 controller.go:615] quota admission added evaluator for: endpoints
	I0803 23:20:30.093935       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0803 23:20:30.098111       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0803 23:20:30.099742       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0803 23:20:30.953728       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0803 23:20:31.424449       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.154 192.168.39.245]
	
	
	==> kube-controller-manager [7ca821f617ef36553aee64a7fe7a7652c81fd47880f2cb64509d96d86aff8c39] <==
	I0803 23:19:50.496661       1 serving.go:380] Generated self-signed cert in-memory
	I0803 23:19:50.759536       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0803 23:19:50.759579       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:19:50.761142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0803 23:19:50.761862       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0803 23:19:50.762001       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0803 23:19:50.762094       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0803 23:20:11.072022       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.154:8443/healthz\": dial tcp 192.168.39.154:8443: connect: connection refused"
	
	
	==> kube-controller-manager [92be7ea582c5789fc13f7d2186937a906a77c3c86199ade3884f8794dd934cbf] <==
	E0803 23:23:02.529648       1 gc_controller.go:153] "Failed to get node" err="node \"ha-076508-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076508-m03"
	I0803 23:23:20.849916       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-076508-m04"
	I0803 23:23:20.897671       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.99906ms"
	I0803 23:23:20.897807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.916µs"
	I0803 23:23:20.923555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.323943ms"
	I0803 23:23:20.925639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.743µs"
	I0803 23:23:20.974431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.736158ms"
	I0803 23:23:20.974523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.339µs"
	I0803 23:23:21.071190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.821729ms"
	I0803 23:23:21.072634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="574.532µs"
	E0803 23:23:22.530157       1 gc_controller.go:153] "Failed to get node" err="node \"ha-076508-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076508-m03"
	E0803 23:23:22.530236       1 gc_controller.go:153] "Failed to get node" err="node \"ha-076508-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076508-m03"
	E0803 23:23:22.530243       1 gc_controller.go:153] "Failed to get node" err="node \"ha-076508-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076508-m03"
	E0803 23:23:22.530248       1 gc_controller.go:153] "Failed to get node" err="node \"ha-076508-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076508-m03"
	E0803 23:23:22.530253       1 gc_controller.go:153] "Failed to get node" err="node \"ha-076508-m03\" not found" logger="pod-garbage-collector-controller" node="ha-076508-m03"
	I0803 23:23:22.688473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.236133ms"
	I0803 23:23:22.688591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.705µs"
	I0803 23:23:40.897788       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-79nmw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-79nmw\": the object has been modified; please apply your changes to the latest version and try again"
	I0803 23:23:40.899506       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"64336eba-dc9f-4608-9026-92a954c040e5", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-79nmw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-79nmw": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:23:40.921227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.235952ms"
	I0803 23:23:40.921520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="164.258µs"
	I0803 23:23:40.953241       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.690021ms"
	I0803 23:23:40.953469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="119.347µs"
	I0803 23:23:41.093629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.203353ms"
	I0803 23:23:41.093777       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.27µs"
	
	
	==> kube-proxy [179245fb3446448ad44e0afb97b692facef742ab27ffbe071c8d1b5f9490cea4] <==
	I0803 23:19:50.828334       1 server_linux.go:69] "Using iptables proxy"
	E0803 23:19:51.694913       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:19:54.765994       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:19:57.839246       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:20:03.981894       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:20:13.199142       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:20:31.629710       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0803 23:20:31.630014       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0803 23:20:31.707586       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:20:31.707796       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:20:31.707838       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:20:31.712996       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:20:31.713768       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:20:31.714103       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:20:31.718215       1 config.go:192] "Starting service config controller"
	I0803 23:20:31.718393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:20:31.718495       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:20:31.718518       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:20:31.720528       1 config.go:319] "Starting node config controller"
	I0803 23:20:31.720635       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:20:31.820437       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:20:31.820517       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:20:31.828381       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c3100c43f706e69c4b66f4caff36304f69fa1fc25c488b422ad481bf533cbffa] <==
	E0803 23:16:56.333682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:16:56.333937       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:16:56.334017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:16:56.334504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:16:56.334600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:02.733698       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:02.734194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:02.733946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:02.734412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:02.734017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:02.734649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:11.951656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:11.951877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:15.022782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:15.022974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:15.023085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:15.023120       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:30.383237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:30.383423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:36.525911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:36.525986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1997": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:17:39.598436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:17:39.598501       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1962": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:18:04.174801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:18:04.175717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-076508&resourceVersion=2004": dial tcp 192.168.39.254:8443: connect: no route to host
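Note on the reflector errors above: the "%!D(MISSING)", "%!F(MISSING)", "%!C(MISSING)" and "%!s(MISSING)" fragments are not log corruption. The request URLs are URL-encoded ("%3D" is "=", "%2F" is "/", "%2C" is ","), and because the encoded URL ends up in a Printf-style logging path with no arguments, Go's fmt package renders each escape as a missing-verb marker. A minimal sketch that reproduces the artifact, assuming a Go toolchain on the host (the temp file path is illustrative):

	cat <<-'EOF' >/tmp/fmt_verb.go
	package main

	import "fmt"

	func main() {
		// "%3D" is the URL encoding of "="; with no matching argument,
		// fmt renders it as "%!D(MISSING)", exactly as in the log lines above.
		fmt.Printf("fieldSelector=metadata.name%3Dha-076508\n")
	}
	EOF
	go run /tmp/fmt_verb.go   # prints: fieldSelector=metadata.name%!D(MISSING)ha-076508

The underlying failure in these lines is simply "no route to host" to control-plane.minikube.internal (192.168.39.254:8443), consistent with the control plane being shut down during the StopCluster test.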
	
	
	==> kube-scheduler [2f205a672c44c8b8b6269744861e2f619021ea9ec9865ab56cdbbccbfd542a5d] <==
	W0803 23:20:25.723325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.154:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:25.723474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.154:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.269714       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.269851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.447641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.154:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.447825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.154:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.758504       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.154:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.758618       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.154:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:26.928266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.154:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:26.928423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.154:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:27.355087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	E0803 23:20:27.355247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.154:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.154:8443: connect: connection refused
	W0803 23:20:30.015729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:20:30.015963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:20:30.016157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:20:30.016196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:20:30.016327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:20:30.016381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:20:30.016477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:20:30.016508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:20:30.016591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:20:30.016657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:20:30.016784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0803 23:20:30.016817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0803 23:20:46.881494       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
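This scheduler container (2f205a67...) came up while the apiserver on 192.168.39.154:8443 was still starting: first connections are refused, then system:kube-scheduler is briefly "forbidden" until the apiserver's RBAC bootstrap settles, and at 23:20:46 the informer caches finally sync. If the cluster were still reachable, the local apiserver could be checked directly; a sketch reusing the minikube ssh form this report already uses (profile name ha-076508 taken from the log; add -n <node> to target a secondary control-plane node):

	# Is the kube-apiserver container running on the node?
	out/minikube-linux-amd64 -p ha-076508 ssh -- sudo crictl ps -a --name kube-apiserver
	# Does the local endpoint answer? (-k because the cluster CA is not in the guest trust store)
	out/minikube-linux-amd64 -p ha-076508 ssh -- curl -sk https://192.168.39.154:8443/healthz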
	
	
	==> kube-scheduler [94ea41effc5da698ac24bdaf24aa0efbac19f2c156a2a360079bcb7e16058fbf] <==
	W0803 23:18:04.503185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:18:04.503379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 23:18:04.548847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:04.548968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.030395       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:18:05.030489       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:18:05.037811       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:18:05.037861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:18:05.041103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:05.041193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.194557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:18:05.194610       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:18:05.399732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:05.399821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.488214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 23:18:05.488307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 23:18:05.863523       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:18:05.863645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:18:10.280529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:18:10.280559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:18:11.289196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:18:11.289373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:18:11.763728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:18:11.763757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:18:11.889862       1 run.go:74] "command failed" err="finished without leader elect"
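This is the earlier scheduler container (94ea41ef...): it hit the same "forbidden" errors for system:kube-scheduler right after the apiserver restarted and exited with "finished without leader elect"; the replacement container in the previous section later recovered. If such errors persisted, the scheduler's RBAC could be verified from the host, for example:

	# Both should print "yes" once the apiserver and its bootstrap RBAC are healthy
	kubectl --context ha-076508 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context ha-076508 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler
	# The binding that grants those permissions
	kubectl --context ha-076508 get clusterrolebinding system:kube-scheduler -o wide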
	
	
	==> kubelet <==
	Aug 03 23:23:24 ha-076508 kubelet[1368]: E0803 23:23:24.910416    1368 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-076508\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	Aug 03 23:23:28 ha-076508 kubelet[1368]: E0803 23:23:28.713618    1368 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-076508?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 03 23:23:30 ha-076508 kubelet[1368]: E0803 23:23:30.836454    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:23:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:23:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:23:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:23:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282377    1368 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: E0803 23:23:33.282500    1368 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-076508\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-076508?timeout=10s\": http2: client connection lost"
	Aug 03 23:23:33 ha-076508 kubelet[1368]: E0803 23:23:33.282613    1368 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-076508?timeout=10s\": http2: client connection lost"
	Aug 03 23:23:33 ha-076508 kubelet[1368]: I0803 23:23:33.282997    1368 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282382    1368 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282406    1368 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282421    1368 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282528    1368 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282543    1368 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282567    1368 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282581    1368 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:33 ha-076508 kubelet[1368]: W0803 23:23:33.282638    1368 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Aug 03 23:23:34 ha-076508 kubelet[1368]: I0803 23:23:34.483623    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-076508" podStartSLOduration=133.483584029 podStartE2EDuration="2m13.483584029s" podCreationTimestamp="2024-08-03 23:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-03 23:21:30.827586682 +0000 UTC m=+840.160093395" watchObservedRunningTime="2024-08-03 23:23:34.483584029 +0000 UTC m=+963.816090761"
	Aug 03 23:24:30 ha-076508 kubelet[1368]: E0803 23:24:30.837028    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:24:30 ha-076508 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:24:30 ha-076508 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:24:30 ha-076508 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:24:30 ha-076508 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
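The repeated "Could not set up iptables canary" entries are a side issue: ip6tables in the guest cannot initialize the IPv6 nat table (the ip6table_nat module is not loaded in the guest kernel), so creating the KUBE-KUBELET-CANARY chain fails. It is noisy but most likely unrelated to the stop timeout. A quick check, sketched with the same minikube ssh form used elsewhere in this report:

	# Is the IPv6 nat table available in the guest kernel?
	out/minikube-linux-amd64 -p ha-076508 ssh -- sudo ip6tables -t nat -L
	# If that fails as in the log, loading the module (when the guest kernel ships it) silences the canary errors
	out/minikube-linux-amd64 -p ha-076508 ssh -- sudo modprobe ip6table_nat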
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0803 23:25:09.363017   37656 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-9607/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
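The "token too long" stderr line explains why no "Last Start" section appears in the dump above: minikube reads lastStart.txt line by line with a bufio.Scanner, and at least one line in it exceeds the scanner's default 64 KiB token limit (the multi-kilobyte cluster-config dumps seen elsewhere in this report are likely candidates). The file itself is intact and can be inspected directly; a sketch using the path from the error message:

	# Length of the longest line; anything over 64 KiB trips bufio.Scanner's default limit
	wc -L /home/jenkins/minikube-integration/19364-9607/.minikube/logs/lastStart.txt
	# Read it without the scanner
	less -S /home/jenkins/minikube-integration/19364-9607/.minikube/logs/lastStart.txt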
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-076508 -n ha-076508
helpers_test.go:261: (dbg) Run:  kubectl --context ha-076508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (324.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-626202
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-626202
E0803 23:40:58.007814   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-626202: exit status 82 (2m1.842107905s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-626202-m03"  ...
	* Stopping node "multinode-626202-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-626202" : exit status 82
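The stop timed out with GUEST_STOP_TIMEOUT (exit status 82): after roughly two minutes at least one of the VMs being stopped still reported state "Running" to the kvm2 driver. When this reproduces locally, the libvirt side can be inspected and nudged directly; a sketch assuming the qemu:///system URI this profile uses and that the libvirt domain names match the node names printed above:

	# What does libvirt think the domains are doing?
	virsh -c qemu:///system list --all | grep multinode-626202
	# Ask for an ACPI shutdown again, then force off a guest that keeps ignoring it
	virsh -c qemu:///system shutdown multinode-626202-m02
	virsh -c qemu:///system destroy multinode-626202-m02
	# Re-run the stop so the profile state is reconciled
	out/minikube-linux-amd64 stop -p multinode-626202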
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626202 --wait=true -v=8 --alsologtostderr
E0803 23:43:27.618799   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:44:01.057374   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626202 --wait=true -v=8 --alsologtostderr: (3m20.093423468s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-626202
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-626202 -n multinode-626202
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-626202 logs -n 25: (1.581832414s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile807028884/001/cp-test_multinode-626202-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202:/home/docker/cp-test_multinode-626202-m02_multinode-626202.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202 sudo cat                                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m02_multinode-626202.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03:/home/docker/cp-test_multinode-626202-m02_multinode-626202-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202-m03 sudo cat                                   | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m02_multinode-626202-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp testdata/cp-test.txt                                                | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile807028884/001/cp-test_multinode-626202-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202:/home/docker/cp-test_multinode-626202-m03_multinode-626202.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202 sudo cat                                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m03_multinode-626202.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02:/home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202-m02 sudo cat                                   | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-626202 node stop m03                                                          | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	| node    | multinode-626202 node start                                                             | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-626202                                                                | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:40 UTC |                     |
	| stop    | -p multinode-626202                                                                     | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:40 UTC |                     |
	| start   | -p multinode-626202                                                                     | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:42 UTC | 03 Aug 24 23:45 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-626202                                                                | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:45 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:42:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:42:29.599266   47076 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:42:29.599380   47076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:42:29.599388   47076 out.go:304] Setting ErrFile to fd 2...
	I0803 23:42:29.599392   47076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:42:29.599604   47076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:42:29.600133   47076 out.go:298] Setting JSON to false
	I0803 23:42:29.600998   47076 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5094,"bootTime":1722723456,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:42:29.601058   47076 start.go:139] virtualization: kvm guest
	I0803 23:42:29.603308   47076 out.go:177] * [multinode-626202] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:42:29.604754   47076 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:42:29.604753   47076 notify.go:220] Checking for updates...
	I0803 23:42:29.606469   47076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:42:29.607959   47076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:42:29.609229   47076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:42:29.610595   47076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:42:29.611892   47076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:42:29.613701   47076 config.go:182] Loaded profile config "multinode-626202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:42:29.613793   47076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:42:29.614214   47076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:42:29.614258   47076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:42:29.629457   47076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0803 23:42:29.629864   47076 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:42:29.630348   47076 main.go:141] libmachine: Using API Version  1
	I0803 23:42:29.630371   47076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:42:29.630750   47076 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:42:29.630921   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:42:29.667533   47076 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:42:29.669044   47076 start.go:297] selected driver: kvm2
	I0803 23:42:29.669062   47076 start.go:901] validating driver "kvm2" against &{Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:42:29.669232   47076 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:42:29.669625   47076 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:42:29.669696   47076 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:42:29.685554   47076 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:42:29.686304   47076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:42:29.686334   47076 cni.go:84] Creating CNI manager for ""
	I0803 23:42:29.686342   47076 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0803 23:42:29.686391   47076 start.go:340] cluster config:
	{Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-626202 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:42:29.686508   47076 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:42:29.688547   47076 out.go:177] * Starting "multinode-626202" primary control-plane node in "multinode-626202" cluster
	I0803 23:42:29.689938   47076 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:42:29.689970   47076 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:42:29.689977   47076 cache.go:56] Caching tarball of preloaded images
	I0803 23:42:29.690048   47076 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:42:29.690061   47076 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:42:29.690187   47076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/config.json ...
	I0803 23:42:29.690377   47076 start.go:360] acquireMachinesLock for multinode-626202: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:42:29.690418   47076 start.go:364] duration metric: took 22.915µs to acquireMachinesLock for "multinode-626202"
	I0803 23:42:29.690428   47076 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:42:29.690436   47076 fix.go:54] fixHost starting: 
	I0803 23:42:29.690675   47076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:42:29.690705   47076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:42:29.705567   47076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36317
	I0803 23:42:29.706067   47076 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:42:29.706604   47076 main.go:141] libmachine: Using API Version  1
	I0803 23:42:29.706633   47076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:42:29.706932   47076 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:42:29.707135   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:42:29.707272   47076 main.go:141] libmachine: (multinode-626202) Calling .GetState
	I0803 23:42:29.708855   47076 fix.go:112] recreateIfNeeded on multinode-626202: state=Running err=<nil>
	W0803 23:42:29.708872   47076 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:42:29.711115   47076 out.go:177] * Updating the running kvm2 "multinode-626202" VM ...
	I0803 23:42:29.712463   47076 machine.go:94] provisionDockerMachine start ...
	I0803 23:42:29.712484   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:42:29.712715   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:29.715173   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.715682   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:29.715709   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.715858   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:29.716034   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.716217   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.716368   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:29.716543   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:29.716741   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:29.716752   47076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:42:29.834840   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-626202
	
	I0803 23:42:29.834864   47076 main.go:141] libmachine: (multinode-626202) Calling .GetMachineName
	I0803 23:42:29.835093   47076 buildroot.go:166] provisioning hostname "multinode-626202"
	I0803 23:42:29.835118   47076 main.go:141] libmachine: (multinode-626202) Calling .GetMachineName
	I0803 23:42:29.835290   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:29.837753   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.838130   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:29.838150   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.838286   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:29.838497   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.838677   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.838899   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:29.839083   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:29.839267   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:29.839279   47076 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-626202 && echo "multinode-626202" | sudo tee /etc/hostname
	I0803 23:42:29.967206   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-626202
	
	I0803 23:42:29.967233   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:29.970125   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.970529   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:29.970575   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.970689   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:29.970884   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.971079   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.971247   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:29.971439   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:29.971634   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:29.971653   47076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-626202' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-626202/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-626202' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:42:30.083028   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:42:30.083062   47076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:42:30.083080   47076 buildroot.go:174] setting up certificates
	I0803 23:42:30.083087   47076 provision.go:84] configureAuth start
	I0803 23:42:30.083096   47076 main.go:141] libmachine: (multinode-626202) Calling .GetMachineName
	I0803 23:42:30.083336   47076 main.go:141] libmachine: (multinode-626202) Calling .GetIP
	I0803 23:42:30.086051   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.086434   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.086455   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.086595   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:30.089335   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.089687   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.089719   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.089872   47076 provision.go:143] copyHostCerts
	I0803 23:42:30.089903   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:42:30.089943   47076 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:42:30.089954   47076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:42:30.090038   47076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:42:30.090171   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:42:30.090196   47076 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:42:30.090203   47076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:42:30.090245   47076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:42:30.090331   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:42:30.090355   47076 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:42:30.090362   47076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:42:30.090397   47076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:42:30.090481   47076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.multinode-626202 san=[127.0.0.1 192.168.39.176 localhost minikube multinode-626202]
	I0803 23:42:30.153747   47076 provision.go:177] copyRemoteCerts
	I0803 23:42:30.153825   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:42:30.153855   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:30.156805   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.157234   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.157263   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.157547   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:30.157760   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:30.157976   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:30.158157   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:42:30.244389   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:42:30.244466   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:42:30.270538   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:42:30.270620   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0803 23:42:30.296437   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:42:30.296510   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:42:30.321268   47076 provision.go:87] duration metric: took 238.16797ms to configureAuth
	I0803 23:42:30.321297   47076 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:42:30.321544   47076 config.go:182] Loaded profile config "multinode-626202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:42:30.321631   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:30.324134   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.324496   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.324523   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.324714   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:30.324897   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:30.325078   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:30.325202   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:30.325335   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:30.325546   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:30.325568   47076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:44:01.064406   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:44:01.064436   47076 machine.go:97] duration metric: took 1m31.351959949s to provisionDockerMachine
	I0803 23:44:01.064449   47076 start.go:293] postStartSetup for "multinode-626202" (driver="kvm2")
	I0803 23:44:01.064463   47076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:44:01.064506   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.064837   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:44:01.064872   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.067981   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.068367   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.068392   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.068513   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.068676   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.068822   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.068971   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:44:01.158407   47076 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:44:01.162894   47076 command_runner.go:130] > NAME=Buildroot
	I0803 23:44:01.162915   47076 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0803 23:44:01.162920   47076 command_runner.go:130] > ID=buildroot
	I0803 23:44:01.162924   47076 command_runner.go:130] > VERSION_ID=2023.02.9
	I0803 23:44:01.162929   47076 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0803 23:44:01.163185   47076 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:44:01.163210   47076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:44:01.163269   47076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:44:01.163353   47076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:44:01.163365   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:44:01.163459   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:44:01.174092   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:44:01.200238   47076 start.go:296] duration metric: took 135.774196ms for postStartSetup
	I0803 23:44:01.200286   47076 fix.go:56] duration metric: took 1m31.509849668s for fixHost
	I0803 23:44:01.200311   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.203027   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.203359   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.203378   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.203569   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.203828   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.204018   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.204156   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.204328   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:44:01.204488   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:44:01.204498   47076 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:44:01.318137   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722728641.294100479
	
	I0803 23:44:01.318161   47076 fix.go:216] guest clock: 1722728641.294100479
	I0803 23:44:01.318169   47076 fix.go:229] Guest: 2024-08-03 23:44:01.294100479 +0000 UTC Remote: 2024-08-03 23:44:01.200292217 +0000 UTC m=+91.636762816 (delta=93.808262ms)
	I0803 23:44:01.318187   47076 fix.go:200] guest clock delta is within tolerance: 93.808262ms
	I0803 23:44:01.318192   47076 start.go:83] releasing machines lock for "multinode-626202", held for 1m31.627769129s
	I0803 23:44:01.318233   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.318484   47076 main.go:141] libmachine: (multinode-626202) Calling .GetIP
	I0803 23:44:01.321471   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.321859   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.321889   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.322039   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.322590   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.322754   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.322817   47076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:44:01.322858   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.323084   47076 ssh_runner.go:195] Run: cat /version.json
	I0803 23:44:01.323102   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.325750   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.325799   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.326219   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.326248   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.326280   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.326297   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.326445   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.326445   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.326667   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.326677   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.326806   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.326872   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.326980   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:44:01.327044   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:44:01.426700   47076 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0803 23:44:01.426756   47076 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0803 23:44:01.426913   47076 ssh_runner.go:195] Run: systemctl --version
	I0803 23:44:01.433218   47076 command_runner.go:130] > systemd 252 (252)
	I0803 23:44:01.433265   47076 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0803 23:44:01.433346   47076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:44:01.600612   47076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0803 23:44:01.609598   47076 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0803 23:44:01.609741   47076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:44:01.609795   47076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:44:01.620080   47076 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:44:01.620109   47076 start.go:495] detecting cgroup driver to use...
	I0803 23:44:01.620174   47076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:44:01.637674   47076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:44:01.654770   47076 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:44:01.654840   47076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:44:01.670835   47076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:44:01.685600   47076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:44:01.840423   47076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:44:01.976931   47076 docker.go:233] disabling docker service ...
	I0803 23:44:01.976998   47076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:44:01.994460   47076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:44:02.008469   47076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:44:02.145921   47076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:44:02.284815   47076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:44:02.299416   47076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:44:02.318434   47076 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0803 23:44:02.318475   47076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:44:02.318545   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.329377   47076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:44:02.329456   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.340535   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.351868   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.363222   47076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:44:02.374615   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.385467   47076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.396736   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.407323   47076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:44:02.417651   47076 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0803 23:44:02.417737   47076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:44:02.427387   47076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:44:02.560402   47076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:44:04.009028   47076 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.448588877s)
	I0803 23:44:04.009064   47076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:44:04.009115   47076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:44:04.014228   47076 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0803 23:44:04.014252   47076 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0803 23:44:04.014261   47076 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0803 23:44:04.014272   47076 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0803 23:44:04.014279   47076 command_runner.go:130] > Access: 2024-08-03 23:44:03.869013992 +0000
	I0803 23:44:04.014290   47076 command_runner.go:130] > Modify: 2024-08-03 23:44:03.869013992 +0000
	I0803 23:44:04.014302   47076 command_runner.go:130] > Change: 2024-08-03 23:44:03.869013992 +0000
	I0803 23:44:04.014311   47076 command_runner.go:130] >  Birth: -
	I0803 23:44:04.014334   47076 start.go:563] Will wait 60s for crictl version
	I0803 23:44:04.014374   47076 ssh_runner.go:195] Run: which crictl
	I0803 23:44:04.018167   47076 command_runner.go:130] > /usr/bin/crictl
	I0803 23:44:04.018236   47076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:44:04.055070   47076 command_runner.go:130] > Version:  0.1.0
	I0803 23:44:04.055094   47076 command_runner.go:130] > RuntimeName:  cri-o
	I0803 23:44:04.055101   47076 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0803 23:44:04.055109   47076 command_runner.go:130] > RuntimeApiVersion:  v1
	I0803 23:44:04.056294   47076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:44:04.056362   47076 ssh_runner.go:195] Run: crio --version
	I0803 23:44:04.085950   47076 command_runner.go:130] > crio version 1.29.1
	I0803 23:44:04.085970   47076 command_runner.go:130] > Version:        1.29.1
	I0803 23:44:04.085977   47076 command_runner.go:130] > GitCommit:      unknown
	I0803 23:44:04.085980   47076 command_runner.go:130] > GitCommitDate:  unknown
	I0803 23:44:04.085985   47076 command_runner.go:130] > GitTreeState:   clean
	I0803 23:44:04.085998   47076 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0803 23:44:04.086014   47076 command_runner.go:130] > GoVersion:      go1.21.6
	I0803 23:44:04.086021   47076 command_runner.go:130] > Compiler:       gc
	I0803 23:44:04.086028   47076 command_runner.go:130] > Platform:       linux/amd64
	I0803 23:44:04.086048   47076 command_runner.go:130] > Linkmode:       dynamic
	I0803 23:44:04.086057   47076 command_runner.go:130] > BuildTags:      
	I0803 23:44:04.086064   47076 command_runner.go:130] >   containers_image_ostree_stub
	I0803 23:44:04.086074   47076 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0803 23:44:04.086078   47076 command_runner.go:130] >   btrfs_noversion
	I0803 23:44:04.086083   47076 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0803 23:44:04.086087   47076 command_runner.go:130] >   libdm_no_deferred_remove
	I0803 23:44:04.086090   47076 command_runner.go:130] >   seccomp
	I0803 23:44:04.086095   47076 command_runner.go:130] > LDFlags:          unknown
	I0803 23:44:04.086099   47076 command_runner.go:130] > SeccompEnabled:   true
	I0803 23:44:04.086103   47076 command_runner.go:130] > AppArmorEnabled:  false
	I0803 23:44:04.086194   47076 ssh_runner.go:195] Run: crio --version
	I0803 23:44:04.116741   47076 command_runner.go:130] > crio version 1.29.1
	I0803 23:44:04.116764   47076 command_runner.go:130] > Version:        1.29.1
	I0803 23:44:04.116770   47076 command_runner.go:130] > GitCommit:      unknown
	I0803 23:44:04.116776   47076 command_runner.go:130] > GitCommitDate:  unknown
	I0803 23:44:04.116782   47076 command_runner.go:130] > GitTreeState:   clean
	I0803 23:44:04.116790   47076 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0803 23:44:04.116796   47076 command_runner.go:130] > GoVersion:      go1.21.6
	I0803 23:44:04.116802   47076 command_runner.go:130] > Compiler:       gc
	I0803 23:44:04.116808   47076 command_runner.go:130] > Platform:       linux/amd64
	I0803 23:44:04.116813   47076 command_runner.go:130] > Linkmode:       dynamic
	I0803 23:44:04.116819   47076 command_runner.go:130] > BuildTags:      
	I0803 23:44:04.116849   47076 command_runner.go:130] >   containers_image_ostree_stub
	I0803 23:44:04.116858   47076 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0803 23:44:04.116865   47076 command_runner.go:130] >   btrfs_noversion
	I0803 23:44:04.116872   47076 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0803 23:44:04.116882   47076 command_runner.go:130] >   libdm_no_deferred_remove
	I0803 23:44:04.116888   47076 command_runner.go:130] >   seccomp
	I0803 23:44:04.116898   47076 command_runner.go:130] > LDFlags:          unknown
	I0803 23:44:04.116905   47076 command_runner.go:130] > SeccompEnabled:   true
	I0803 23:44:04.116922   47076 command_runner.go:130] > AppArmorEnabled:  false
	I0803 23:44:04.119755   47076 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:44:04.120987   47076 main.go:141] libmachine: (multinode-626202) Calling .GetIP
	I0803 23:44:04.123667   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:04.123996   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:04.124026   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:04.124192   47076 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:44:04.128502   47076 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0803 23:44:04.128778   47076 kubeadm.go:883] updating cluster {Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:44:04.129036   47076 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:44:04.129097   47076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:44:04.175426   47076 command_runner.go:130] > {
	I0803 23:44:04.175453   47076 command_runner.go:130] >   "images": [
	I0803 23:44:04.175458   47076 command_runner.go:130] >     {
	I0803 23:44:04.175466   47076 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0803 23:44:04.175470   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175477   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0803 23:44:04.175480   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175484   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175492   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0803 23:44:04.175499   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0803 23:44:04.175502   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175508   47076 command_runner.go:130] >       "size": "87165492",
	I0803 23:44:04.175515   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175522   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175536   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175546   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175551   47076 command_runner.go:130] >     },
	I0803 23:44:04.175555   47076 command_runner.go:130] >     {
	I0803 23:44:04.175561   47076 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0803 23:44:04.175565   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175570   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0803 23:44:04.175574   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175578   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175587   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0803 23:44:04.175597   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0803 23:44:04.175605   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175612   47076 command_runner.go:130] >       "size": "87174707",
	I0803 23:44:04.175621   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175632   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175641   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175648   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175656   47076 command_runner.go:130] >     },
	I0803 23:44:04.175660   47076 command_runner.go:130] >     {
	I0803 23:44:04.175675   47076 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0803 23:44:04.175683   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175694   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0803 23:44:04.175704   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175714   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175725   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0803 23:44:04.175735   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0803 23:44:04.175741   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175748   47076 command_runner.go:130] >       "size": "1363676",
	I0803 23:44:04.175753   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175761   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175766   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175775   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175781   47076 command_runner.go:130] >     },
	I0803 23:44:04.175790   47076 command_runner.go:130] >     {
	I0803 23:44:04.175802   47076 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0803 23:44:04.175809   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175820   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0803 23:44:04.175829   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175837   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175848   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0803 23:44:04.175875   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0803 23:44:04.175884   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175891   47076 command_runner.go:130] >       "size": "31470524",
	I0803 23:44:04.175898   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175904   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175913   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175922   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175930   47076 command_runner.go:130] >     },
	I0803 23:44:04.175935   47076 command_runner.go:130] >     {
	I0803 23:44:04.175946   47076 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0803 23:44:04.175955   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175967   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0803 23:44:04.175976   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175983   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175998   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0803 23:44:04.176018   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0803 23:44:04.176024   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176029   47076 command_runner.go:130] >       "size": "61245718",
	I0803 23:44:04.176036   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.176046   47076 command_runner.go:130] >       "username": "nonroot",
	I0803 23:44:04.176052   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176061   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176066   47076 command_runner.go:130] >     },
	I0803 23:44:04.176075   47076 command_runner.go:130] >     {
	I0803 23:44:04.176084   47076 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0803 23:44:04.176093   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176101   47076 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0803 23:44:04.176107   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176111   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176130   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0803 23:44:04.176144   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0803 23:44:04.176155   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176162   47076 command_runner.go:130] >       "size": "150779692",
	I0803 23:44:04.176171   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176181   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176189   47076 command_runner.go:130] >       },
	I0803 23:44:04.176196   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176201   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176210   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176216   47076 command_runner.go:130] >     },
	I0803 23:44:04.176225   47076 command_runner.go:130] >     {
	I0803 23:44:04.176235   47076 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0803 23:44:04.176243   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176252   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0803 23:44:04.176260   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176267   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176278   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0803 23:44:04.176288   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0803 23:44:04.176297   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176304   47076 command_runner.go:130] >       "size": "117609954",
	I0803 23:44:04.176313   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176326   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176334   47076 command_runner.go:130] >       },
	I0803 23:44:04.176341   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176350   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176359   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176365   47076 command_runner.go:130] >     },
	I0803 23:44:04.176368   47076 command_runner.go:130] >     {
	I0803 23:44:04.176380   47076 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0803 23:44:04.176390   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176402   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0803 23:44:04.176410   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176420   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176448   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0803 23:44:04.176460   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0803 23:44:04.176468   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176479   47076 command_runner.go:130] >       "size": "112198984",
	I0803 23:44:04.176488   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176495   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176503   47076 command_runner.go:130] >       },
	I0803 23:44:04.176510   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176516   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176522   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176527   47076 command_runner.go:130] >     },
	I0803 23:44:04.176538   47076 command_runner.go:130] >     {
	I0803 23:44:04.176547   47076 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0803 23:44:04.176553   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176559   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0803 23:44:04.176564   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176570   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176583   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0803 23:44:04.176593   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0803 23:44:04.176598   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176613   47076 command_runner.go:130] >       "size": "85953945",
	I0803 23:44:04.176622   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.176628   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176637   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176649   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176657   47076 command_runner.go:130] >     },
	I0803 23:44:04.176662   47076 command_runner.go:130] >     {
	I0803 23:44:04.176671   47076 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0803 23:44:04.176681   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176689   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0803 23:44:04.176697   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176703   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176717   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0803 23:44:04.176731   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0803 23:44:04.176737   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176742   47076 command_runner.go:130] >       "size": "63051080",
	I0803 23:44:04.176746   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176750   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176755   47076 command_runner.go:130] >       },
	I0803 23:44:04.176759   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176763   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176768   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176772   47076 command_runner.go:130] >     },
	I0803 23:44:04.176775   47076 command_runner.go:130] >     {
	I0803 23:44:04.176781   47076 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0803 23:44:04.176786   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176790   47076 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0803 23:44:04.176793   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176798   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176804   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0803 23:44:04.176810   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0803 23:44:04.176814   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176817   47076 command_runner.go:130] >       "size": "750414",
	I0803 23:44:04.176821   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176825   47076 command_runner.go:130] >         "value": "65535"
	I0803 23:44:04.176829   47076 command_runner.go:130] >       },
	I0803 23:44:04.176833   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176838   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176842   47076 command_runner.go:130] >       "pinned": true
	I0803 23:44:04.176845   47076 command_runner.go:130] >     }
	I0803 23:44:04.176853   47076 command_runner.go:130] >   ]
	I0803 23:44:04.176859   47076 command_runner.go:130] > }
	I0803 23:44:04.177062   47076 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:44:04.177077   47076 crio.go:433] Images already preloaded, skipping extraction
	I0803 23:44:04.177126   47076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:44:04.211002   47076 command_runner.go:130] > {
	I0803 23:44:04.211030   47076 command_runner.go:130] >   "images": [
	I0803 23:44:04.211036   47076 command_runner.go:130] >     {
	I0803 23:44:04.211049   47076 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0803 23:44:04.211056   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211067   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0803 23:44:04.211073   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211080   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211093   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0803 23:44:04.211107   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0803 23:44:04.211115   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211120   47076 command_runner.go:130] >       "size": "87165492",
	I0803 23:44:04.211126   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211130   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211137   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211148   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211152   47076 command_runner.go:130] >     },
	I0803 23:44:04.211157   47076 command_runner.go:130] >     {
	I0803 23:44:04.211163   47076 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0803 23:44:04.211168   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211174   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0803 23:44:04.211180   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211184   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211193   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0803 23:44:04.211200   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0803 23:44:04.211205   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211209   47076 command_runner.go:130] >       "size": "87174707",
	I0803 23:44:04.211213   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211227   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211233   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211242   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211248   47076 command_runner.go:130] >     },
	I0803 23:44:04.211252   47076 command_runner.go:130] >     {
	I0803 23:44:04.211260   47076 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0803 23:44:04.211267   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211273   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0803 23:44:04.211279   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211282   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211289   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0803 23:44:04.211298   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0803 23:44:04.211301   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211305   47076 command_runner.go:130] >       "size": "1363676",
	I0803 23:44:04.211309   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211313   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211319   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211325   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211329   47076 command_runner.go:130] >     },
	I0803 23:44:04.211334   47076 command_runner.go:130] >     {
	I0803 23:44:04.211340   47076 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0803 23:44:04.211346   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211351   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0803 23:44:04.211357   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211361   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211370   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0803 23:44:04.211386   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0803 23:44:04.211392   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211396   47076 command_runner.go:130] >       "size": "31470524",
	I0803 23:44:04.211400   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211404   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211410   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211413   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211419   47076 command_runner.go:130] >     },
	I0803 23:44:04.211423   47076 command_runner.go:130] >     {
	I0803 23:44:04.211429   47076 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0803 23:44:04.211434   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211439   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0803 23:44:04.211450   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211460   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211471   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0803 23:44:04.211487   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0803 23:44:04.211496   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211501   47076 command_runner.go:130] >       "size": "61245718",
	I0803 23:44:04.211504   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211508   47076 command_runner.go:130] >       "username": "nonroot",
	I0803 23:44:04.211512   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211516   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211519   47076 command_runner.go:130] >     },
	I0803 23:44:04.211522   47076 command_runner.go:130] >     {
	I0803 23:44:04.211528   47076 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0803 23:44:04.211535   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211540   47076 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0803 23:44:04.211545   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211549   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211556   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0803 23:44:04.211563   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0803 23:44:04.211566   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211571   47076 command_runner.go:130] >       "size": "150779692",
	I0803 23:44:04.211576   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.211580   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.211586   47076 command_runner.go:130] >       },
	I0803 23:44:04.211592   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211596   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211600   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211603   47076 command_runner.go:130] >     },
	I0803 23:44:04.211607   47076 command_runner.go:130] >     {
	I0803 23:44:04.211613   47076 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0803 23:44:04.211619   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211624   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0803 23:44:04.211629   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211633   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211641   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0803 23:44:04.211650   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0803 23:44:04.211660   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211667   47076 command_runner.go:130] >       "size": "117609954",
	I0803 23:44:04.211670   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.211676   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.211679   47076 command_runner.go:130] >       },
	I0803 23:44:04.211685   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211689   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211694   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211697   47076 command_runner.go:130] >     },
	I0803 23:44:04.211700   47076 command_runner.go:130] >     {
	I0803 23:44:04.211706   47076 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0803 23:44:04.211712   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211717   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0803 23:44:04.211722   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211726   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211745   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0803 23:44:04.211754   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0803 23:44:04.211758   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211762   47076 command_runner.go:130] >       "size": "112198984",
	I0803 23:44:04.211770   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.211776   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.211784   47076 command_runner.go:130] >       },
	I0803 23:44:04.211790   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211799   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211805   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211812   47076 command_runner.go:130] >     },
	I0803 23:44:04.211817   47076 command_runner.go:130] >     {
	I0803 23:44:04.211830   47076 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0803 23:44:04.211836   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211844   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0803 23:44:04.211848   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211853   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211863   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0803 23:44:04.211881   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0803 23:44:04.211890   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211897   47076 command_runner.go:130] >       "size": "85953945",
	I0803 23:44:04.211913   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211923   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211929   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211935   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211940   47076 command_runner.go:130] >     },
	I0803 23:44:04.211946   47076 command_runner.go:130] >     {
	I0803 23:44:04.211955   47076 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0803 23:44:04.211964   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211971   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0803 23:44:04.211979   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211984   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211998   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0803 23:44:04.212012   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0803 23:44:04.212020   47076 command_runner.go:130] >       ],
	I0803 23:44:04.212026   47076 command_runner.go:130] >       "size": "63051080",
	I0803 23:44:04.212034   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.212039   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.212047   47076 command_runner.go:130] >       },
	I0803 23:44:04.212053   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.212062   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.212068   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.212076   47076 command_runner.go:130] >     },
	I0803 23:44:04.212081   47076 command_runner.go:130] >     {
	I0803 23:44:04.212093   47076 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0803 23:44:04.212099   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.212107   47076 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0803 23:44:04.212111   47076 command_runner.go:130] >       ],
	I0803 23:44:04.212119   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.212130   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0803 23:44:04.212150   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0803 23:44:04.212158   47076 command_runner.go:130] >       ],
	I0803 23:44:04.212165   47076 command_runner.go:130] >       "size": "750414",
	I0803 23:44:04.212173   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.212190   47076 command_runner.go:130] >         "value": "65535"
	I0803 23:44:04.212196   47076 command_runner.go:130] >       },
	I0803 23:44:04.212200   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.212213   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.212219   47076 command_runner.go:130] >       "pinned": true
	I0803 23:44:04.212222   47076 command_runner.go:130] >     }
	I0803 23:44:04.212225   47076 command_runner.go:130] >   ]
	I0803 23:44:04.212229   47076 command_runner.go:130] > }
	I0803 23:44:04.212447   47076 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:44:04.212471   47076 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:44:04.212480   47076 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.30.3 crio true true} ...
	I0803 23:44:04.212688   47076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-626202 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:44:04.212800   47076 ssh_runner.go:195] Run: crio config
	I0803 23:44:04.259338   47076 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0803 23:44:04.259372   47076 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0803 23:44:04.259384   47076 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0803 23:44:04.259389   47076 command_runner.go:130] > #
	I0803 23:44:04.259400   47076 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0803 23:44:04.259409   47076 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0803 23:44:04.259419   47076 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0803 23:44:04.259444   47076 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0803 23:44:04.259453   47076 command_runner.go:130] > # reload'.
	I0803 23:44:04.259461   47076 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0803 23:44:04.259468   47076 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0803 23:44:04.259476   47076 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0803 23:44:04.259484   47076 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0803 23:44:04.259492   47076 command_runner.go:130] > [crio]
	I0803 23:44:04.259502   47076 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0803 23:44:04.259512   47076 command_runner.go:130] > # containers images, in this directory.
	I0803 23:44:04.259546   47076 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0803 23:44:04.259585   47076 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0803 23:44:04.259599   47076 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0803 23:44:04.259615   47076 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0803 23:44:04.259853   47076 command_runner.go:130] > # imagestore = ""
	I0803 23:44:04.259868   47076 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0803 23:44:04.259877   47076 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0803 23:44:04.260062   47076 command_runner.go:130] > storage_driver = "overlay"
	I0803 23:44:04.260090   47076 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0803 23:44:04.260100   47076 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0803 23:44:04.260109   47076 command_runner.go:130] > storage_option = [
	I0803 23:44:04.260196   47076 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0803 23:44:04.260234   47076 command_runner.go:130] > ]
	I0803 23:44:04.260250   47076 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0803 23:44:04.260268   47076 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0803 23:44:04.260551   47076 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0803 23:44:04.260565   47076 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0803 23:44:04.260574   47076 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0803 23:44:04.260582   47076 command_runner.go:130] > # always happen on a node reboot
	I0803 23:44:04.260901   47076 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0803 23:44:04.260932   47076 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0803 23:44:04.260946   47076 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0803 23:44:04.260954   47076 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0803 23:44:04.261153   47076 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0803 23:44:04.261173   47076 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0803 23:44:04.261185   47076 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0803 23:44:04.261387   47076 command_runner.go:130] > # internal_wipe = true
	I0803 23:44:04.261410   47076 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0803 23:44:04.261419   47076 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0803 23:44:04.261678   47076 command_runner.go:130] > # internal_repair = false
	I0803 23:44:04.261690   47076 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0803 23:44:04.261700   47076 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0803 23:44:04.261710   47076 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0803 23:44:04.262169   47076 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0803 23:44:04.262182   47076 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0803 23:44:04.262188   47076 command_runner.go:130] > [crio.api]
	I0803 23:44:04.262195   47076 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0803 23:44:04.262460   47076 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0803 23:44:04.262473   47076 command_runner.go:130] > # IP address on which the stream server will listen.
	I0803 23:44:04.262736   47076 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0803 23:44:04.262749   47076 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0803 23:44:04.262758   47076 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0803 23:44:04.262976   47076 command_runner.go:130] > # stream_port = "0"
	I0803 23:44:04.262988   47076 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0803 23:44:04.263290   47076 command_runner.go:130] > # stream_enable_tls = false
	I0803 23:44:04.263302   47076 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0803 23:44:04.263310   47076 command_runner.go:130] > # stream_idle_timeout = ""
	I0803 23:44:04.263321   47076 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0803 23:44:04.263334   47076 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0803 23:44:04.263340   47076 command_runner.go:130] > # minutes.
	I0803 23:44:04.263352   47076 command_runner.go:130] > # stream_tls_cert = ""
	I0803 23:44:04.263369   47076 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0803 23:44:04.263382   47076 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0803 23:44:04.263474   47076 command_runner.go:130] > # stream_tls_key = ""
	I0803 23:44:04.263494   47076 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0803 23:44:04.263505   47076 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0803 23:44:04.263538   47076 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0803 23:44:04.263547   47076 command_runner.go:130] > # stream_tls_ca = ""
	I0803 23:44:04.263560   47076 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0803 23:44:04.263571   47076 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0803 23:44:04.263585   47076 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0803 23:44:04.263596   47076 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0803 23:44:04.263612   47076 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0803 23:44:04.263629   47076 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0803 23:44:04.263642   47076 command_runner.go:130] > [crio.runtime]
	I0803 23:44:04.263654   47076 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0803 23:44:04.263666   47076 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0803 23:44:04.263675   47076 command_runner.go:130] > # "nofile=1024:2048"
	I0803 23:44:04.263685   47076 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0803 23:44:04.263693   47076 command_runner.go:130] > # default_ulimits = [
	I0803 23:44:04.263699   47076 command_runner.go:130] > # ]
	I0803 23:44:04.263709   47076 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0803 23:44:04.263718   47076 command_runner.go:130] > # no_pivot = false
	I0803 23:44:04.263726   47076 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0803 23:44:04.263739   47076 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0803 23:44:04.263751   47076 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0803 23:44:04.263763   47076 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0803 23:44:04.263771   47076 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0803 23:44:04.263783   47076 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0803 23:44:04.263794   47076 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0803 23:44:04.263801   47076 command_runner.go:130] > # Cgroup setting for conmon
	I0803 23:44:04.263812   47076 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0803 23:44:04.263821   47076 command_runner.go:130] > conmon_cgroup = "pod"
	I0803 23:44:04.263831   47076 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0803 23:44:04.263842   47076 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0803 23:44:04.263856   47076 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0803 23:44:04.263865   47076 command_runner.go:130] > conmon_env = [
	I0803 23:44:04.263873   47076 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0803 23:44:04.263885   47076 command_runner.go:130] > ]
	I0803 23:44:04.263895   47076 command_runner.go:130] > # Additional environment variables to set for all the
	I0803 23:44:04.263904   47076 command_runner.go:130] > # containers. These are overridden if set in the
	I0803 23:44:04.263920   47076 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0803 23:44:04.263929   47076 command_runner.go:130] > # default_env = [
	I0803 23:44:04.263934   47076 command_runner.go:130] > # ]
	I0803 23:44:04.263943   47076 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0803 23:44:04.263957   47076 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0803 23:44:04.263967   47076 command_runner.go:130] > # selinux = false
	I0803 23:44:04.263976   47076 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0803 23:44:04.263988   47076 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0803 23:44:04.263996   47076 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0803 23:44:04.264010   47076 command_runner.go:130] > # seccomp_profile = ""
	I0803 23:44:04.264025   47076 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0803 23:44:04.264042   47076 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0803 23:44:04.264054   47076 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0803 23:44:04.264063   47076 command_runner.go:130] > # which might increase security.
	I0803 23:44:04.264071   47076 command_runner.go:130] > # This option is currently deprecated,
	I0803 23:44:04.264083   47076 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0803 23:44:04.264090   47076 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0803 23:44:04.264103   47076 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0803 23:44:04.264115   47076 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0803 23:44:04.264128   47076 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0803 23:44:04.264139   47076 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0803 23:44:04.264150   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.264159   47076 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0803 23:44:04.264170   47076 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0803 23:44:04.264177   47076 command_runner.go:130] > # the cgroup blockio controller.
	I0803 23:44:04.264184   47076 command_runner.go:130] > # blockio_config_file = ""
	I0803 23:44:04.264198   47076 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0803 23:44:04.264205   47076 command_runner.go:130] > # blockio parameters.
	I0803 23:44:04.264211   47076 command_runner.go:130] > # blockio_reload = false
	I0803 23:44:04.264221   47076 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0803 23:44:04.264234   47076 command_runner.go:130] > # irqbalance daemon.
	I0803 23:44:04.264244   47076 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0803 23:44:04.264257   47076 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0803 23:44:04.264271   47076 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0803 23:44:04.264281   47076 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0803 23:44:04.264297   47076 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0803 23:44:04.264313   47076 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0803 23:44:04.264321   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.264333   47076 command_runner.go:130] > # rdt_config_file = ""
	I0803 23:44:04.264345   47076 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0803 23:44:04.264355   47076 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0803 23:44:04.264404   47076 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0803 23:44:04.264422   47076 command_runner.go:130] > # separate_pull_cgroup = ""
	I0803 23:44:04.264432   47076 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0803 23:44:04.264442   47076 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0803 23:44:04.264448   47076 command_runner.go:130] > # will be added.
	I0803 23:44:04.264455   47076 command_runner.go:130] > # default_capabilities = [
	I0803 23:44:04.264461   47076 command_runner.go:130] > # 	"CHOWN",
	I0803 23:44:04.264469   47076 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0803 23:44:04.264475   47076 command_runner.go:130] > # 	"FSETID",
	I0803 23:44:04.264480   47076 command_runner.go:130] > # 	"FOWNER",
	I0803 23:44:04.264488   47076 command_runner.go:130] > # 	"SETGID",
	I0803 23:44:04.264494   47076 command_runner.go:130] > # 	"SETUID",
	I0803 23:44:04.264499   47076 command_runner.go:130] > # 	"SETPCAP",
	I0803 23:44:04.264509   47076 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0803 23:44:04.264514   47076 command_runner.go:130] > # 	"KILL",
	I0803 23:44:04.264522   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264534   47076 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0803 23:44:04.264547   47076 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0803 23:44:04.264555   47076 command_runner.go:130] > # add_inheritable_capabilities = false
	I0803 23:44:04.264567   47076 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0803 23:44:04.264579   47076 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0803 23:44:04.264587   47076 command_runner.go:130] > default_sysctls = [
	I0803 23:44:04.264597   47076 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0803 23:44:04.264605   47076 command_runner.go:130] > ]
	I0803 23:44:04.264612   47076 command_runner.go:130] > # List of devices on the host that a
	I0803 23:44:04.264625   47076 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0803 23:44:04.264633   47076 command_runner.go:130] > # allowed_devices = [
	I0803 23:44:04.264639   47076 command_runner.go:130] > # 	"/dev/fuse",
	I0803 23:44:04.264657   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264664   47076 command_runner.go:130] > # List of additional devices. specified as
	I0803 23:44:04.264680   47076 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0803 23:44:04.264688   47076 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0803 23:44:04.264700   47076 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0803 23:44:04.264707   47076 command_runner.go:130] > # additional_devices = [
	I0803 23:44:04.264713   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264721   47076 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0803 23:44:04.264732   47076 command_runner.go:130] > # cdi_spec_dirs = [
	I0803 23:44:04.264741   47076 command_runner.go:130] > # 	"/etc/cdi",
	I0803 23:44:04.264748   47076 command_runner.go:130] > # 	"/var/run/cdi",
	I0803 23:44:04.264755   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264773   47076 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0803 23:44:04.264785   47076 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0803 23:44:04.264795   47076 command_runner.go:130] > # Defaults to false.
	I0803 23:44:04.264802   47076 command_runner.go:130] > # device_ownership_from_security_context = false
	I0803 23:44:04.264814   47076 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0803 23:44:04.264823   47076 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0803 23:44:04.264832   47076 command_runner.go:130] > # hooks_dir = [
	I0803 23:44:04.264842   47076 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0803 23:44:04.264851   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264860   47076 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0803 23:44:04.264873   47076 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0803 23:44:04.264884   47076 command_runner.go:130] > # its default mounts from the following two files:
	I0803 23:44:04.264892   47076 command_runner.go:130] > #
	I0803 23:44:04.264903   47076 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0803 23:44:04.264916   47076 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0803 23:44:04.264928   47076 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0803 23:44:04.264935   47076 command_runner.go:130] > #
	I0803 23:44:04.264944   47076 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0803 23:44:04.264961   47076 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0803 23:44:04.264973   47076 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0803 23:44:04.264984   47076 command_runner.go:130] > #      only add mounts it finds in this file.
	I0803 23:44:04.264989   47076 command_runner.go:130] > #
	I0803 23:44:04.264996   47076 command_runner.go:130] > # default_mounts_file = ""
	I0803 23:44:04.265007   47076 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0803 23:44:04.265016   47076 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0803 23:44:04.265025   47076 command_runner.go:130] > pids_limit = 1024
	I0803 23:44:04.265039   47076 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0803 23:44:04.265051   47076 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0803 23:44:04.265071   47076 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0803 23:44:04.265088   47076 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0803 23:44:04.265103   47076 command_runner.go:130] > # log_size_max = -1
	I0803 23:44:04.265113   47076 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0803 23:44:04.265122   47076 command_runner.go:130] > # log_to_journald = false
	I0803 23:44:04.265136   47076 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0803 23:44:04.265146   47076 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0803 23:44:04.265160   47076 command_runner.go:130] > # Path to directory for container attach sockets.
	I0803 23:44:04.265176   47076 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0803 23:44:04.265188   47076 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0803 23:44:04.265196   47076 command_runner.go:130] > # bind_mount_prefix = ""
	I0803 23:44:04.265202   47076 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0803 23:44:04.265210   47076 command_runner.go:130] > # read_only = false
	I0803 23:44:04.265218   47076 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0803 23:44:04.265231   47076 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0803 23:44:04.265240   47076 command_runner.go:130] > # live configuration reload.
	I0803 23:44:04.265246   47076 command_runner.go:130] > # log_level = "info"
	I0803 23:44:04.265257   47076 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0803 23:44:04.265269   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.265277   47076 command_runner.go:130] > # log_filter = ""
	I0803 23:44:04.265286   47076 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0803 23:44:04.265298   47076 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0803 23:44:04.265308   47076 command_runner.go:130] > # separated by comma.
	I0803 23:44:04.265319   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265328   47076 command_runner.go:130] > # uid_mappings = ""
	I0803 23:44:04.265338   47076 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0803 23:44:04.265362   47076 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0803 23:44:04.265372   47076 command_runner.go:130] > # separated by comma.
	I0803 23:44:04.265383   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265393   47076 command_runner.go:130] > # gid_mappings = ""
	I0803 23:44:04.265402   47076 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0803 23:44:04.265414   47076 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0803 23:44:04.265427   47076 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0803 23:44:04.265441   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265451   47076 command_runner.go:130] > # minimum_mappable_uid = -1
	I0803 23:44:04.265463   47076 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0803 23:44:04.265478   47076 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0803 23:44:04.265490   47076 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0803 23:44:04.265505   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265514   47076 command_runner.go:130] > # minimum_mappable_gid = -1
	I0803 23:44:04.265524   47076 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0803 23:44:04.265534   47076 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0803 23:44:04.265544   47076 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0803 23:44:04.265553   47076 command_runner.go:130] > # ctr_stop_timeout = 30
	I0803 23:44:04.265571   47076 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0803 23:44:04.265583   47076 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0803 23:44:04.265594   47076 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0803 23:44:04.265603   47076 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0803 23:44:04.265613   47076 command_runner.go:130] > drop_infra_ctr = false
	I0803 23:44:04.265622   47076 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0803 23:44:04.265633   47076 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0803 23:44:04.265643   47076 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0803 23:44:04.265652   47076 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0803 23:44:04.265664   47076 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0803 23:44:04.265675   47076 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0803 23:44:04.265686   47076 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0803 23:44:04.265713   47076 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0803 23:44:04.265722   47076 command_runner.go:130] > # shared_cpuset = ""
	I0803 23:44:04.265731   47076 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0803 23:44:04.265741   47076 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0803 23:44:04.265751   47076 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0803 23:44:04.265762   47076 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0803 23:44:04.265771   47076 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0803 23:44:04.265779   47076 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0803 23:44:04.265792   47076 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0803 23:44:04.265800   47076 command_runner.go:130] > # enable_criu_support = false
	I0803 23:44:04.265810   47076 command_runner.go:130] > # Enable/disable the generation of the container,
	I0803 23:44:04.265821   47076 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0803 23:44:04.265826   47076 command_runner.go:130] > # enable_pod_events = false
	I0803 23:44:04.265838   47076 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0803 23:44:04.265850   47076 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0803 23:44:04.265860   47076 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0803 23:44:04.265872   47076 command_runner.go:130] > # default_runtime = "runc"
	I0803 23:44:04.265882   47076 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0803 23:44:04.265896   47076 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0803 23:44:04.265910   47076 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0803 23:44:04.265920   47076 command_runner.go:130] > # creation as a file is not desired either.
	I0803 23:44:04.265933   47076 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0803 23:44:04.265948   47076 command_runner.go:130] > # the hostname is being managed dynamically.
	I0803 23:44:04.265957   47076 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0803 23:44:04.265966   47076 command_runner.go:130] > # ]
	I0803 23:44:04.265975   47076 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0803 23:44:04.265984   47076 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0803 23:44:04.265995   47076 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0803 23:44:04.266006   47076 command_runner.go:130] > # Each entry in the table should follow the format:
	I0803 23:44:04.266010   47076 command_runner.go:130] > #
	I0803 23:44:04.266017   47076 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0803 23:44:04.266028   47076 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0803 23:44:04.266111   47076 command_runner.go:130] > # runtime_type = "oci"
	I0803 23:44:04.266121   47076 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0803 23:44:04.266125   47076 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0803 23:44:04.266129   47076 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0803 23:44:04.266133   47076 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0803 23:44:04.266139   47076 command_runner.go:130] > # monitor_env = []
	I0803 23:44:04.266144   47076 command_runner.go:130] > # privileged_without_host_devices = false
	I0803 23:44:04.266149   47076 command_runner.go:130] > # allowed_annotations = []
	I0803 23:44:04.266154   47076 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0803 23:44:04.266159   47076 command_runner.go:130] > # Where:
	I0803 23:44:04.266164   47076 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0803 23:44:04.266169   47076 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0803 23:44:04.266177   47076 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0803 23:44:04.266183   47076 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0803 23:44:04.266189   47076 command_runner.go:130] > #   in $PATH.
	I0803 23:44:04.266194   47076 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0803 23:44:04.266199   47076 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0803 23:44:04.266205   47076 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0803 23:44:04.266211   47076 command_runner.go:130] > #   state.
	I0803 23:44:04.266217   47076 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0803 23:44:04.266225   47076 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0803 23:44:04.266233   47076 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0803 23:44:04.266238   47076 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0803 23:44:04.266246   47076 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0803 23:44:04.266252   47076 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0803 23:44:04.266259   47076 command_runner.go:130] > #   The currently recognized values are:
	I0803 23:44:04.266265   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0803 23:44:04.266273   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0803 23:44:04.266286   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0803 23:44:04.266294   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0803 23:44:04.266301   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0803 23:44:04.266309   47076 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0803 23:44:04.266315   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0803 23:44:04.266324   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0803 23:44:04.266330   47076 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0803 23:44:04.266337   47076 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0803 23:44:04.266342   47076 command_runner.go:130] > #   deprecated option "conmon".
	I0803 23:44:04.266350   47076 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0803 23:44:04.266355   47076 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0803 23:44:04.266364   47076 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0803 23:44:04.266368   47076 command_runner.go:130] > #   should be moved to the container's cgroup
	I0803 23:44:04.266374   47076 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0803 23:44:04.266381   47076 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0803 23:44:04.266387   47076 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0803 23:44:04.266394   47076 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0803 23:44:04.266397   47076 command_runner.go:130] > #
	I0803 23:44:04.266404   47076 command_runner.go:130] > # Using the seccomp notifier feature:
	I0803 23:44:04.266407   47076 command_runner.go:130] > #
	I0803 23:44:04.266412   47076 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0803 23:44:04.266422   47076 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0803 23:44:04.266426   47076 command_runner.go:130] > #
	I0803 23:44:04.266431   47076 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0803 23:44:04.266441   47076 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0803 23:44:04.266444   47076 command_runner.go:130] > #
	I0803 23:44:04.266450   47076 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0803 23:44:04.266455   47076 command_runner.go:130] > # feature.
	I0803 23:44:04.266458   47076 command_runner.go:130] > #
	I0803 23:44:04.266463   47076 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0803 23:44:04.266471   47076 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0803 23:44:04.266477   47076 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0803 23:44:04.266484   47076 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0803 23:44:04.266490   47076 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0803 23:44:04.266493   47076 command_runner.go:130] > #
	I0803 23:44:04.266499   47076 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0803 23:44:04.266514   47076 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0803 23:44:04.266519   47076 command_runner.go:130] > #
	I0803 23:44:04.266524   47076 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0803 23:44:04.266532   47076 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0803 23:44:04.266535   47076 command_runner.go:130] > #
	I0803 23:44:04.266543   47076 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0803 23:44:04.266551   47076 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0803 23:44:04.266557   47076 command_runner.go:130] > # limitation.
	I0803 23:44:04.266561   47076 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0803 23:44:04.266565   47076 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0803 23:44:04.266569   47076 command_runner.go:130] > runtime_type = "oci"
	I0803 23:44:04.266575   47076 command_runner.go:130] > runtime_root = "/run/runc"
	I0803 23:44:04.266579   47076 command_runner.go:130] > runtime_config_path = ""
	I0803 23:44:04.266583   47076 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0803 23:44:04.266589   47076 command_runner.go:130] > monitor_cgroup = "pod"
	I0803 23:44:04.266593   47076 command_runner.go:130] > monitor_exec_cgroup = ""
	I0803 23:44:04.266597   47076 command_runner.go:130] > monitor_env = [
	I0803 23:44:04.266604   47076 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0803 23:44:04.266607   47076 command_runner.go:130] > ]
	I0803 23:44:04.266611   47076 command_runner.go:130] > privileged_without_host_devices = false
	I0803 23:44:04.266619   47076 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0803 23:44:04.266624   47076 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0803 23:44:04.266632   47076 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0803 23:44:04.266640   47076 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0803 23:44:04.266649   47076 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0803 23:44:04.266654   47076 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0803 23:44:04.266663   47076 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0803 23:44:04.266672   47076 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0803 23:44:04.266677   47076 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0803 23:44:04.266686   47076 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0803 23:44:04.266689   47076 command_runner.go:130] > # Example:
	I0803 23:44:04.266694   47076 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0803 23:44:04.266698   47076 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0803 23:44:04.266702   47076 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0803 23:44:04.266707   47076 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0803 23:44:04.266710   47076 command_runner.go:130] > # cpuset = 0
	I0803 23:44:04.266721   47076 command_runner.go:130] > # cpushares = "0-1"
	I0803 23:44:04.266724   47076 command_runner.go:130] > # Where:
	I0803 23:44:04.266730   47076 command_runner.go:130] > # The workload name is workload-type.
	I0803 23:44:04.266736   47076 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0803 23:44:04.266741   47076 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0803 23:44:04.266746   47076 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0803 23:44:04.266753   47076 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0803 23:44:04.266757   47076 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0803 23:44:04.266762   47076 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0803 23:44:04.266768   47076 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0803 23:44:04.266772   47076 command_runner.go:130] > # Default value is set to true
	I0803 23:44:04.266775   47076 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0803 23:44:04.266782   47076 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0803 23:44:04.266787   47076 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0803 23:44:04.266791   47076 command_runner.go:130] > # Default value is set to 'false'
	I0803 23:44:04.266795   47076 command_runner.go:130] > # disable_hostport_mapping = false
	I0803 23:44:04.266801   47076 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0803 23:44:04.266803   47076 command_runner.go:130] > #
	I0803 23:44:04.266808   47076 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0803 23:44:04.266814   47076 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0803 23:44:04.266819   47076 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0803 23:44:04.266825   47076 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0803 23:44:04.266832   47076 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0803 23:44:04.266835   47076 command_runner.go:130] > [crio.image]
	I0803 23:44:04.266840   47076 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0803 23:44:04.266844   47076 command_runner.go:130] > # default_transport = "docker://"
	I0803 23:44:04.266849   47076 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0803 23:44:04.266855   47076 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0803 23:44:04.266859   47076 command_runner.go:130] > # global_auth_file = ""
	I0803 23:44:04.266866   47076 command_runner.go:130] > # The image used to instantiate infra containers.
	I0803 23:44:04.266870   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.266874   47076 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0803 23:44:04.266879   47076 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0803 23:44:04.266887   47076 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0803 23:44:04.266891   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.266894   47076 command_runner.go:130] > # pause_image_auth_file = ""
	I0803 23:44:04.266905   47076 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0803 23:44:04.266913   47076 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0803 23:44:04.266921   47076 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0803 23:44:04.266928   47076 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0803 23:44:04.266934   47076 command_runner.go:130] > # pause_command = "/pause"
	I0803 23:44:04.266945   47076 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0803 23:44:04.266953   47076 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0803 23:44:04.266964   47076 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0803 23:44:04.266973   47076 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0803 23:44:04.266984   47076 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0803 23:44:04.266994   47076 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0803 23:44:04.267002   47076 command_runner.go:130] > # pinned_images = [
	I0803 23:44:04.267007   47076 command_runner.go:130] > # ]
	I0803 23:44:04.267018   47076 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0803 23:44:04.267034   47076 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0803 23:44:04.267046   47076 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0803 23:44:04.267057   47076 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0803 23:44:04.267067   47076 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0803 23:44:04.267078   47076 command_runner.go:130] > # signature_policy = ""
	I0803 23:44:04.267084   47076 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0803 23:44:04.267097   47076 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0803 23:44:04.267108   47076 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0803 23:44:04.267117   47076 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0803 23:44:04.267128   47076 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0803 23:44:04.267139   47076 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0803 23:44:04.267147   47076 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0803 23:44:04.267159   47076 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0803 23:44:04.267168   47076 command_runner.go:130] > # changing them here.
	I0803 23:44:04.267184   47076 command_runner.go:130] > # insecure_registries = [
	I0803 23:44:04.267192   47076 command_runner.go:130] > # ]
	I0803 23:44:04.267202   47076 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0803 23:44:04.267213   47076 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0803 23:44:04.267219   47076 command_runner.go:130] > # image_volumes = "mkdir"
	I0803 23:44:04.267228   47076 command_runner.go:130] > # Temporary directory to use for storing big files
	I0803 23:44:04.267237   47076 command_runner.go:130] > # big_files_temporary_dir = ""
	I0803 23:44:04.267243   47076 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0803 23:44:04.267253   47076 command_runner.go:130] > # CNI plugins.
	I0803 23:44:04.267258   47076 command_runner.go:130] > [crio.network]
	I0803 23:44:04.267264   47076 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0803 23:44:04.267273   47076 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0803 23:44:04.267279   47076 command_runner.go:130] > # cni_default_network = ""
	I0803 23:44:04.267285   47076 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0803 23:44:04.267290   47076 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0803 23:44:04.267301   47076 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0803 23:44:04.267307   47076 command_runner.go:130] > # plugin_dirs = [
	I0803 23:44:04.267315   47076 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0803 23:44:04.267320   47076 command_runner.go:130] > # ]
	I0803 23:44:04.267332   47076 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0803 23:44:04.267341   47076 command_runner.go:130] > [crio.metrics]
	I0803 23:44:04.267349   47076 command_runner.go:130] > # Globally enable or disable metrics support.
	I0803 23:44:04.267356   47076 command_runner.go:130] > enable_metrics = true
	I0803 23:44:04.267361   47076 command_runner.go:130] > # Specify enabled metrics collectors.
	I0803 23:44:04.267367   47076 command_runner.go:130] > # Per default all metrics are enabled.
	I0803 23:44:04.267373   47076 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0803 23:44:04.267382   47076 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0803 23:44:04.267387   47076 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0803 23:44:04.267391   47076 command_runner.go:130] > # metrics_collectors = [
	I0803 23:44:04.267395   47076 command_runner.go:130] > # 	"operations",
	I0803 23:44:04.267399   47076 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0803 23:44:04.267406   47076 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0803 23:44:04.267410   47076 command_runner.go:130] > # 	"operations_errors",
	I0803 23:44:04.267414   47076 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0803 23:44:04.267418   47076 command_runner.go:130] > # 	"image_pulls_by_name",
	I0803 23:44:04.267428   47076 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0803 23:44:04.267434   47076 command_runner.go:130] > # 	"image_pulls_failures",
	I0803 23:44:04.267443   47076 command_runner.go:130] > # 	"image_pulls_successes",
	I0803 23:44:04.267449   47076 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0803 23:44:04.267458   47076 command_runner.go:130] > # 	"image_layer_reuse",
	I0803 23:44:04.267467   47076 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0803 23:44:04.267474   47076 command_runner.go:130] > # 	"containers_oom_total",
	I0803 23:44:04.267482   47076 command_runner.go:130] > # 	"containers_oom",
	I0803 23:44:04.267489   47076 command_runner.go:130] > # 	"processes_defunct",
	I0803 23:44:04.267507   47076 command_runner.go:130] > # 	"operations_total",
	I0803 23:44:04.267561   47076 command_runner.go:130] > # 	"operations_latency_seconds",
	I0803 23:44:04.267578   47076 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0803 23:44:04.267582   47076 command_runner.go:130] > # 	"operations_errors_total",
	I0803 23:44:04.267589   47076 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0803 23:44:04.267594   47076 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0803 23:44:04.267600   47076 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0803 23:44:04.267605   47076 command_runner.go:130] > # 	"image_pulls_success_total",
	I0803 23:44:04.267609   47076 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0803 23:44:04.267616   47076 command_runner.go:130] > # 	"containers_oom_count_total",
	I0803 23:44:04.267622   47076 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0803 23:44:04.267627   47076 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0803 23:44:04.267645   47076 command_runner.go:130] > # ]
	I0803 23:44:04.267653   47076 command_runner.go:130] > # The port on which the metrics server will listen.
	I0803 23:44:04.267657   47076 command_runner.go:130] > # metrics_port = 9090
	I0803 23:44:04.267662   47076 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0803 23:44:04.267668   47076 command_runner.go:130] > # metrics_socket = ""
	I0803 23:44:04.267673   47076 command_runner.go:130] > # The certificate for the secure metrics server.
	I0803 23:44:04.267679   47076 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0803 23:44:04.267687   47076 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0803 23:44:04.267692   47076 command_runner.go:130] > # certificate on any modification event.
	I0803 23:44:04.267698   47076 command_runner.go:130] > # metrics_cert = ""
	I0803 23:44:04.267702   47076 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0803 23:44:04.267707   47076 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0803 23:44:04.267713   47076 command_runner.go:130] > # metrics_key = ""
	I0803 23:44:04.267719   47076 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0803 23:44:04.267725   47076 command_runner.go:130] > [crio.tracing]
	I0803 23:44:04.267730   47076 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0803 23:44:04.267736   47076 command_runner.go:130] > # enable_tracing = false
	I0803 23:44:04.267741   47076 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0803 23:44:04.267748   47076 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0803 23:44:04.267755   47076 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0803 23:44:04.267761   47076 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0803 23:44:04.267765   47076 command_runner.go:130] > # CRI-O NRI configuration.
	I0803 23:44:04.267769   47076 command_runner.go:130] > [crio.nri]
	I0803 23:44:04.267773   47076 command_runner.go:130] > # Globally enable or disable NRI.
	I0803 23:44:04.267782   47076 command_runner.go:130] > # enable_nri = false
	I0803 23:44:04.267788   47076 command_runner.go:130] > # NRI socket to listen on.
	I0803 23:44:04.267793   47076 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0803 23:44:04.267797   47076 command_runner.go:130] > # NRI plugin directory to use.
	I0803 23:44:04.267801   47076 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0803 23:44:04.267808   47076 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0803 23:44:04.267812   47076 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0803 23:44:04.267819   47076 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0803 23:44:04.267828   47076 command_runner.go:130] > # nri_disable_connections = false
	I0803 23:44:04.267835   47076 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0803 23:44:04.267840   47076 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0803 23:44:04.267845   47076 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0803 23:44:04.267852   47076 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0803 23:44:04.267858   47076 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0803 23:44:04.267863   47076 command_runner.go:130] > [crio.stats]
	I0803 23:44:04.267869   47076 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0803 23:44:04.267874   47076 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0803 23:44:04.267881   47076 command_runner.go:130] > # stats_collection_period = 0
	I0803 23:44:04.267912   47076 command_runner.go:130] ! time="2024-08-03 23:44:04.221797741Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0803 23:44:04.267929   47076 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
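The dump above is the full CRI-O 1.29.1 configuration that minikube renders on the node; keys shown commented out are simply left at the defaults described in the surrounding comments. As an illustrative sketch only (these commands are not part of the captured run, and assume the multinode-626202 profile is still up and reachable), the merged configuration can be inspected directly on the node:

	$ minikube ssh -p multinode-626202
	$ sudo crio config | grep -A 4 '\[crio.metrics\]'   # print the merged TOML config, here showing the metrics section
	$ sudo crictl info                                  # runtime status and configuration as JSON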
	I0803 23:44:04.268083   47076 cni.go:84] Creating CNI manager for ""
	I0803 23:44:04.268101   47076 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0803 23:44:04.268114   47076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:44:04.268143   47076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-626202 NodeName:multinode-626202 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:44:04.268264   47076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-626202"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
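The config just generated bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one file, which the next few lines copy to /var/tmp/minikube/kubeadm.yaml.new on the node. Purely as an illustration (no such command appears in this run), that file could be exercised against kubeadm without modifying the cluster:

	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run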
	I0803 23:44:04.268328   47076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:44:04.279466   47076 command_runner.go:130] > kubeadm
	I0803 23:44:04.279485   47076 command_runner.go:130] > kubectl
	I0803 23:44:04.279490   47076 command_runner.go:130] > kubelet
	I0803 23:44:04.279506   47076 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:44:04.279567   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 23:44:04.290075   47076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0803 23:44:04.308331   47076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:44:04.325905   47076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0803 23:44:04.343037   47076 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I0803 23:44:04.347411   47076 command_runner.go:130] > 192.168.39.176	control-plane.minikube.internal
	I0803 23:44:04.347492   47076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:44:04.485790   47076 ssh_runner.go:195] Run: sudo systemctl start kubelet
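The two units written just above are the kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the kubelet.service unit at /lib/systemd/system/kubelet.service; the daemon-reload and start make systemd pick both up. A quick way to confirm what systemd actually merged (shown only as a sketch, not part of the captured run):

	$ sudo systemctl cat kubelet            # kubelet.service plus the 10-kubeadm.conf drop-in
	$ sudo systemctl status kubelet --no-pager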
	I0803 23:44:04.503105   47076 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202 for IP: 192.168.39.176
	I0803 23:44:04.503128   47076 certs.go:194] generating shared ca certs ...
	I0803 23:44:04.503149   47076 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:44:04.503307   47076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:44:04.503362   47076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:44:04.503375   47076 certs.go:256] generating profile certs ...
	I0803 23:44:04.503493   47076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/client.key
	I0803 23:44:04.503572   47076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.key.a1d01b81
	I0803 23:44:04.503621   47076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.key
	I0803 23:44:04.503635   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:44:04.503656   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:44:04.503674   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:44:04.503698   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:44:04.503718   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:44:04.503737   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:44:04.503755   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:44:04.503772   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:44:04.503843   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:44:04.503881   47076 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:44:04.503893   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:44:04.503930   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:44:04.503973   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:44:04.504005   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:44:04.504063   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:44:04.504111   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.504133   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.504159   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.504750   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:44:04.532103   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:44:04.559044   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:44:04.586637   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:44:04.613899   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 23:44:04.639888   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:44:04.669124   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:44:04.696224   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:44:04.722613   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:44:04.749586   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:44:04.777627   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:44:04.805629   47076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:44:04.824455   47076 ssh_runner.go:195] Run: openssl version
	I0803 23:44:04.830675   47076 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0803 23:44:04.830860   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:44:04.844332   47076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.849332   47076 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.849377   47076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.849428   47076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.855600   47076 command_runner.go:130] > b5213941
	I0803 23:44:04.856218   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:44:04.868766   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:44:04.882510   47076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.887796   47076 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.888069   47076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.888132   47076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.894011   47076 command_runner.go:130] > 51391683
	I0803 23:44:04.894310   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:44:04.905669   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:44:04.918416   47076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.923792   47076 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.923924   47076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.923978   47076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.929865   47076 command_runner.go:130] > 3ec20f2e
	I0803 23:44:04.929950   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
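The repeated pattern above is OpenSSL's hashed-directory layout: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA.pem in this run), and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL find the CA during verification. An illustrative check against the same files (not part of the captured run):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ ls -l /etc/ssl/certs/b5213941.0
	$ openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt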
	I0803 23:44:04.941014   47076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:44:04.945914   47076 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:44:04.945961   47076 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0803 23:44:04.945989   47076 command_runner.go:130] > Device: 253,1	Inode: 5244971     Links: 1
	I0803 23:44:04.946005   47076 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0803 23:44:04.946019   47076 command_runner.go:130] > Access: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946029   47076 command_runner.go:130] > Modify: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946039   47076 command_runner.go:130] > Change: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946049   47076 command_runner.go:130] >  Birth: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946165   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:44:04.952170   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.952372   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:44:04.958694   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.958768   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:44:04.964948   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.965023   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:44:04.971341   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.971414   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:44:04.977645   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.977947   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0803 23:44:04.984286   47076 command_runner.go:130] > Certificate will not expire
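Each of the -checkend 86400 calls above asks OpenSSL whether the certificate will expire within the next 86,400 seconds (24 hours); an exit status of 0 is what produces the "Certificate will not expire" lines. The same check written out explicitly, for illustration only:

	$ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "still valid for at least 24h" || echo "expires within 24h"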
	I0803 23:44:04.984358   47076 kubeadm.go:392] StartCluster: {Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:44:04.984501   47076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:44:04.984560   47076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:44:05.031734   47076 command_runner.go:130] > ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649
	I0803 23:44:05.031756   47076 command_runner.go:130] > 7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c
	I0803 23:44:05.031762   47076 command_runner.go:130] > 52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d
	I0803 23:44:05.031770   47076 command_runner.go:130] > 7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31
	I0803 23:44:05.031778   47076 command_runner.go:130] > 661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed
	I0803 23:44:05.031786   47076 command_runner.go:130] > 08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511
	I0803 23:44:05.031847   47076 command_runner.go:130] > 10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f
	I0803 23:44:05.031892   47076 command_runner.go:130] > b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509
	I0803 23:44:05.033880   47076 cri.go:89] found id: "ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649"
	I0803 23:44:05.033897   47076 cri.go:89] found id: "7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c"
	I0803 23:44:05.033900   47076 cri.go:89] found id: "52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d"
	I0803 23:44:05.033903   47076 cri.go:89] found id: "7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31"
	I0803 23:44:05.033906   47076 cri.go:89] found id: "661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed"
	I0803 23:44:05.033909   47076 cri.go:89] found id: "08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511"
	I0803 23:44:05.033912   47076 cri.go:89] found id: "10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f"
	I0803 23:44:05.033914   47076 cri.go:89] found id: "b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509"
	I0803 23:44:05.033918   47076 cri.go:89] found id: ""
	I0803 23:44:05.033985   47076 ssh_runner.go:195] Run: sudo runc list -f json
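The eight IDs found above come from the label-filtered crictl ps -a call a few lines earlier; each can be inspected on its own. As an illustrative follow-up only (not part of the captured run; the first ID is identified further down in the CRI-O listing as the exited coredns container):

	$ sudo crictl inspect ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649   # full JSON for the first ID above
	$ sudo crictl logs ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649      # its container log, if still retained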
	
	
	==> CRI-O <==
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.333833090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728750333807122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1e0a281-a6c5-444e-a506-7b308af535b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.334304270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=668f7e79-6969-4d12-985e-c4dc2bc0e21d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.334384698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=668f7e79-6969-4d12-985e-c4dc2bc0e21d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.334758677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=668f7e79-6969-4d12-985e-c4dc2bc0e21d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.377757583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e0f0579-f07c-424d-9147-9d7bfeb38b13 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.377853557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e0f0579-f07c-424d-9147-9d7bfeb38b13 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.379636835Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4dd911f1-a886-4a02-855a-a03c45eb9872 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.380115391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728750380092638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4dd911f1-a886-4a02-855a-a03c45eb9872 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.380857650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c8f4c71-ccec-476a-ab2f-6d934e7bc21d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.380931120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c8f4c71-ccec-476a-ab2f-6d934e7bc21d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.382356674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c8f4c71-ccec-476a-ab2f-6d934e7bc21d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.429436519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ab6286f-d8a9-4831-a12a-a2ae1b83ba7c name=/runtime.v1.RuntimeService/Version
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.429546632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ab6286f-d8a9-4831-a12a-a2ae1b83ba7c name=/runtime.v1.RuntimeService/Version
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.433587463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47822b78-10d6-4597-8609-6a0c4d893146 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.434009235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728750433985330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47822b78-10d6-4597-8609-6a0c4d893146 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.434894514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=804a7572-9d85-4da8-9ade-733aa171dbe5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.434968011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=804a7572-9d85-4da8-9ade-733aa171dbe5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.435475308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=804a7572-9d85-4da8-9ade-733aa171dbe5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.477486867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28937442-25e8-4d11-a489-e97c2ec11c62 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.477575268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28937442-25e8-4d11-a489-e97c2ec11c62 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.478853120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22542302-8c29-4822-bfe0-c4f2557f8868 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.479363531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728750479339433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22542302-8c29-4822-bfe0-c4f2557f8868 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.479808643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=166bdd63-0366-48e3-8803-559a8d012e9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.479880283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=166bdd63-0366-48e3-8803-559a8d012e9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:45:50 multinode-626202 crio[2916]: time="2024-08-03 23:45:50.480351295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=166bdd63-0366-48e3-8803-559a8d012e9c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bf31911075b9b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   adad4dd335f68       busybox-fc5497c4f-lj84f
	a2dba722c179a       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   a189a8637d24d       kindnet-jhldg
	1ce603ac16e9a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   7def7c2c9c103       coredns-7db6d8ff4d-29fhz
	f7cb032cc115f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a8684e3d62987       storage-provisioner
	e933251cf98c1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   ccc61d5972671       kube-proxy-26jcw
	3306098d465b9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   78b2c2c7a7a68       kube-scheduler-multinode-626202
	22d5e27a3f1a2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   2bd384bb5a28b       etcd-multinode-626202
	8a5fe95be143b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   c021f12a225f3       kube-apiserver-multinode-626202
	0fae4409b535b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   4765d15aa38e7       kube-controller-manager-multinode-626202
	27175547d279f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   8b26b6ab2746a       busybox-fc5497c4f-lj84f
	ed8181672e8c9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   5d2f859ce857d       coredns-7db6d8ff4d-29fhz
	7a258641e738f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   acc17b1aeafc4       storage-provisioner
	52ac99500c4cf       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   7434a1f6067be       kindnet-jhldg
	7e8ea75035d5c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   9a2ee63ac405a       kube-proxy-26jcw
	661f87888da85       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   bdb3f89076082       etcd-multinode-626202
	08f8e99e72584       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   334b0982686a3       kube-apiserver-multinode-626202
	10ca9a5bb8d9c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   f746e6aa04c0c       kube-scheduler-multinode-626202
	b7994126d209c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   6e1b8fce3835e       kube-controller-manager-multinode-626202
	
	
	==> coredns [1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36971 - 16893 "HINFO IN 9099449929247806992.1502052661215963512. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015019218s
	
	
	==> coredns [ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649] <==
	[INFO] 10.244.0.3:36685 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001840134s
	[INFO] 10.244.0.3:45668 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088915s
	[INFO] 10.244.0.3:33174 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000035022s
	[INFO] 10.244.0.3:42026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001122494s
	[INFO] 10.244.0.3:46337 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043322s
	[INFO] 10.244.0.3:44769 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046396s
	[INFO] 10.244.0.3:45333 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089521s
	[INFO] 10.244.1.2:55558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170162s
	[INFO] 10.244.1.2:55873 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182693s
	[INFO] 10.244.1.2:40311 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174612s
	[INFO] 10.244.1.2:53164 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063875s
	[INFO] 10.244.0.3:40527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130165s
	[INFO] 10.244.0.3:56260 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082293s
	[INFO] 10.244.0.3:60666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062261s
	[INFO] 10.244.0.3:60582 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063584s
	[INFO] 10.244.1.2:50593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134382s
	[INFO] 10.244.1.2:57543 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150736s
	[INFO] 10.244.1.2:45727 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127395s
	[INFO] 10.244.1.2:58801 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000149701s
	[INFO] 10.244.0.3:33370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011872s
	[INFO] 10.244.0.3:49551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130042s
	[INFO] 10.244.0.3:56273 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085081s
	[INFO] 10.244.0.3:51286 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056031s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-626202
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-626202
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=multinode-626202
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_37_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-626202
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:45:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    multinode-626202
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 447bbcc6652343e7a8f7b43a086853c1
	  System UUID:                447bbcc6-6523-43e7-a8f7-b43a086853c1
	  Boot ID:                    20a00bd5-bce6-4c4b-b103-e4236543bb16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lj84f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 coredns-7db6d8ff4d-29fhz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-626202                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m37s
	  kube-system                 kindnet-jhldg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-multinode-626202             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-multinode-626202    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-26jcw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-multinode-626202             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m21s                  kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 8m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m43s (x8 over 8m43s)  kubelet          Node multinode-626202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m43s (x8 over 8m43s)  kubelet          Node multinode-626202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m43s (x7 over 8m43s)  kubelet          Node multinode-626202 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m37s                  kubelet          Node multinode-626202 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m37s                  kubelet          Node multinode-626202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m37s                  kubelet          Node multinode-626202 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m37s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m23s                  node-controller  Node multinode-626202 event: Registered Node multinode-626202 in Controller
	  Normal  NodeReady                8m6s                   kubelet          Node multinode-626202 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-626202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-626202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-626202 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           87s                    node-controller  Node multinode-626202 event: Registered Node multinode-626202 in Controller
	
	
	Name:               multinode-626202-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-626202-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=multinode-626202
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_44_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:44:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-626202-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:45:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:44:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:44:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:44:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:45:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    multinode-626202-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5258a704ee2645088936da00b0d990e8
	  System UUID:                5258a704-ee26-4508-8936-da00b0d990e8
	  Boot ID:                    9c64f6bf-023b-49d5-a6fb-dfd7a5691bb0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pzwdv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-4vv8k              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-proxy-hb6jt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 7m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m34s (x2 over 7m34s)  kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s (x2 over 7m34s)  kubelet          Node multinode-626202-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s (x2 over 7m34s)  kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m13s                  kubelet          Node multinode-626202-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet          Node multinode-626202-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           57s                    node-controller  Node multinode-626202-m02 event: Registered Node multinode-626202-m02 in Controller
	  Normal  NodeReady                42s                    kubelet          Node multinode-626202-m02 status is now: NodeReady
	
	
	Name:               multinode-626202-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-626202-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=multinode-626202
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_45_28_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:45:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-626202-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:45:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:45:47 +0000   Sat, 03 Aug 2024 23:45:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:45:47 +0000   Sat, 03 Aug 2024 23:45:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:45:47 +0000   Sat, 03 Aug 2024 23:45:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:45:47 +0000   Sat, 03 Aug 2024 23:45:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    multinode-626202-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea6fac1513bd4d289d2b08c5c99a0cc8
	  System UUID:                ea6fac15-13bd-4d28-9d2b-08c5c99a0cc8
	  Boot ID:                    b1e58e06-0da7-4962-8e40-94d2ac5dc905
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zv26n       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-hs49z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m31s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m41s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m37s)  kubelet     Node multinode-626202-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m37s)  kubelet     Node multinode-626202-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m37s)  kubelet     Node multinode-626202-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet     Node multinode-626202-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet     Node multinode-626202-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet     Node multinode-626202-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet     Node multinode-626202-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet     Node multinode-626202-m03 status is now: NodeReady
	  Normal  Starting                 23s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-626202-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-626202-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-626202-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-626202-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.053105] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.188518] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.118986] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.271804] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[Aug 3 23:37] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +5.405608] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[  +0.061146] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.993135] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.085087] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.161074] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[  +0.107884] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.017324] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 3 23:38] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 3 23:44] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.155883] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +0.164953] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +0.139242] systemd-fstab-generator[2873]: Ignoring "noauto" option for root device
	[  +0.276408] systemd-fstab-generator[2901]: Ignoring "noauto" option for root device
	[  +1.923698] systemd-fstab-generator[3001]: Ignoring "noauto" option for root device
	[  +1.986029] systemd-fstab-generator[3126]: Ignoring "noauto" option for root device
	[  +0.828997] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.043857] kauditd_printk_skb: 45 callbacks suppressed
	[ +11.147206] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.783874] systemd-fstab-generator[3958]: Ignoring "noauto" option for root device
	[ +21.666860] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6] <==
	{"level":"info","ts":"2024-08-03T23:44:08.140312Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-03T23:44:08.140355Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-03T23:44:08.139839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=(17801975325160492603)"}
	{"level":"info","ts":"2024-08-03T23:44:08.14051Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","added-peer-id":"f70d523d4475ce3b","added-peer-peer-urls":["https://192.168.39.176:2380"]}
	{"level":"info","ts":"2024-08-03T23:44:08.142398Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:44:08.144285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:44:08.197561Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-03T23:44:08.197853Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:44:08.197883Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:44:08.198083Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f70d523d4475ce3b","initial-advertise-peer-urls":["https://192.168.39.176:2380"],"listen-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-03T23:44:08.198132Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-03T23:44:09.503095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-03T23:44:09.503204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-03T23:44:09.503327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgPreVoteResp from f70d523d4475ce3b at term 2"}
	{"level":"info","ts":"2024-08-03T23:44:09.503367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became candidate at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.503391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgVoteResp from f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.503418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became leader at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.50346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f70d523d4475ce3b elected leader f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.509345Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f70d523d4475ce3b","local-member-attributes":"{Name:multinode-626202 ClientURLs:[https://192.168.39.176:2379]}","request-path":"/0/members/f70d523d4475ce3b/attributes","cluster-id":"40fea5b1ef9207e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-03T23:44:09.509354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:44:09.509593Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T23:44:09.509631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-03T23:44:09.509385Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:44:09.511566Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-03T23:44:09.51163Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.176:2379"}
	
	
	==> etcd [661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed] <==
	{"level":"info","ts":"2024-08-03T23:37:08.922874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:37:08.923406Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:37:08.925288Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T23:37:08.932263Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-03T23:37:08.925321Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:37:08.932422Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:37:08.932472Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:37:08.926785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.176:2379"}
	{"level":"info","ts":"2024-08-03T23:37:08.953576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-03T23:38:17.008277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.407393ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14860630938888900058 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:4e3b911a9aa70dd9>","response":"size:41"}
	{"level":"info","ts":"2024-08-03T23:38:17.009474Z","caller":"traceutil/trace.go:171","msg":"trace[1542968404] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"183.694386ms","start":"2024-08-03T23:38:16.825744Z","end":"2024-08-03T23:38:17.009438Z","steps":["trace[1542968404] 'process raft request'  (duration: 183.382637ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:38:22.160013Z","caller":"traceutil/trace.go:171","msg":"trace[345099284] transaction","detail":"{read_only:false; response_revision:533; number_of_response:1; }","duration":"192.113424ms","start":"2024-08-03T23:38:21.967884Z","end":"2024-08-03T23:38:22.159997Z","steps":["trace[345099284] 'process raft request'  (duration: 191.71439ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:39:14.271572Z","caller":"traceutil/trace.go:171","msg":"trace[494443657] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"236.741273ms","start":"2024-08-03T23:39:14.034774Z","end":"2024-08-03T23:39:14.271515Z","steps":["trace[494443657] 'process raft request'  (duration: 236.644163ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:39:14.27254Z","caller":"traceutil/trace.go:171","msg":"trace[1379350830] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"174.352772ms","start":"2024-08-03T23:39:14.098175Z","end":"2024-08-03T23:39:14.272528Z","steps":["trace[1379350830] 'process raft request'  (duration: 174.189875ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:42:30.459984Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-03T23:42:30.460543Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-626202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
	{"level":"warn","ts":"2024-08-03T23:42:30.460659Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:42:30.460751Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/08/03 23:42:30 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:42:30.542269Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:42:30.542365Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-03T23:42:30.543867Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f70d523d4475ce3b","current-leader-member-id":"f70d523d4475ce3b"}
	{"level":"info","ts":"2024-08-03T23:42:30.546514Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:42:30.546652Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:42:30.546677Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-626202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
	
	
	==> kernel <==
	 23:45:51 up 9 min,  0 users,  load average: 0.36, 0.42, 0.25
	Linux multinode-626202 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d] <==
	I0803 23:41:43.715163       1 main.go:299] handling current node
	I0803 23:41:53.712202       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:41:53.712404       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:41:53.712620       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:41:53.712651       1 main.go:299] handling current node
	I0803 23:41:53.712673       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:41:53.712678       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:03.720080       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:42:03.720254       1 main.go:299] handling current node
	I0803 23:42:03.720322       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:42:03.720331       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:03.720546       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:42:03.720571       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:42:13.718114       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:42:13.718160       1 main.go:299] handling current node
	I0803 23:42:13.718176       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:42:13.718181       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:13.718378       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:42:13.718405       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:42:23.721384       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:42:23.721440       1 main.go:299] handling current node
	I0803 23:42:23.721463       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:42:23.721468       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:23.721617       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:42:23.721645       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11] <==
	I0803 23:45:03.516720       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:45:13.513292       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:45:13.513428       1 main.go:299] handling current node
	I0803 23:45:13.513471       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:45:13.513490       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:45:13.513709       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:45:13.513744       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:45:23.517270       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:45:23.517326       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:45:23.517452       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:45:23.517474       1 main.go:299] handling current node
	I0803 23:45:23.517490       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:45:23.517509       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:45:33.517737       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:45:33.517789       1 main.go:299] handling current node
	I0803 23:45:33.517806       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:45:33.517812       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:45:33.518022       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:45:33.518032       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.2.0/24] 
	I0803 23:45:43.516471       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:45:43.516591       1 main.go:299] handling current node
	I0803 23:45:43.516624       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:45:43.516698       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:45:43.516891       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:45:43.516948       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511] <==
	I0803 23:42:30.467480       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0803 23:42:30.467726       1 available_controller.go:439] Shutting down AvailableConditionController
	I0803 23:42:30.467775       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0803 23:42:30.467796       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0803 23:42:30.467840       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0803 23:42:30.467886       1 autoregister_controller.go:165] Shutting down autoregister controller
	W0803 23:42:30.467977       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0803 23:42:30.468029       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0803 23:42:30.468074       1 establishing_controller.go:87] Shutting down EstablishingController
	I0803 23:42:30.468125       1 naming_controller.go:302] Shutting down NamingConditionController
	I0803 23:42:30.468167       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0803 23:42:30.468182       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	W0803 23:42:30.468207       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0803 23:42:30.468208       1 controller.go:129] Ending legacy_token_tracking_controller
	I0803 23:42:30.473553       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0803 23:42:30.473589       1 controller.go:167] Shutting down OpenAPI controller
	I0803 23:42:30.473673       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0803 23:42:30.473698       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0803 23:42:30.473823       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0803 23:42:30.474687       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0803 23:42:30.477641       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0803 23:42:30.467984       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0803 23:42:30.482717       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0803 23:42:30.482770       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0803 23:42:30.482788       1 controller.go:84] Shutting down OpenAPI AggregationController
	
	
	==> kube-apiserver [8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23] <==
	I0803 23:44:10.764046       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0803 23:44:10.861510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0803 23:44:10.866703       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0803 23:44:10.867724       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0803 23:44:10.867820       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0803 23:44:10.867840       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0803 23:44:10.867855       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0803 23:44:10.867861       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0803 23:44:10.867931       1 shared_informer.go:320] Caches are synced for configmaps
	I0803 23:44:10.868768       1 aggregator.go:165] initial CRD sync complete...
	I0803 23:44:10.868812       1 autoregister_controller.go:141] Starting autoregister controller
	I0803 23:44:10.868819       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0803 23:44:10.868826       1 cache.go:39] Caches are synced for autoregister controller
	I0803 23:44:10.901494       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0803 23:44:10.914405       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:44:10.914427       1 policy_source.go:224] refreshing policies
	I0803 23:44:10.959969       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 23:44:11.773620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0803 23:44:13.236017       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0803 23:44:13.365800       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0803 23:44:13.381680       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0803 23:44:13.465305       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0803 23:44:13.472469       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0803 23:44:23.313753       1 controller.go:615] quota admission added evaluator for: endpoints
	I0803 23:44:23.362895       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc] <==
	I0803 23:44:23.979950       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0803 23:44:23.980122       1 shared_informer.go:320] Caches are synced for garbage collector
	I0803 23:44:44.291315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.481974ms"
	I0803 23:44:44.291401       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.553µs"
	I0803 23:44:44.302349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.080774ms"
	I0803 23:44:44.302424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.402µs"
	I0803 23:44:48.432631       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m02\" does not exist"
	I0803 23:44:48.445651       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m02" podCIDRs=["10.244.1.0/24"]
	I0803 23:44:49.739444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.815µs"
	I0803 23:44:50.321714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.322µs"
	I0803 23:44:50.348787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.228µs"
	I0803 23:44:50.363507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.592µs"
	I0803 23:44:50.389656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.764µs"
	I0803 23:44:50.397687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.507µs"
	I0803 23:44:50.400655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.358µs"
	I0803 23:45:08.329552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:45:08.350308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.109µs"
	I0803 23:45:08.364814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.756µs"
	I0803 23:45:11.732770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.841671ms"
	I0803 23:45:11.734161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="795.273µs"
	I0803 23:45:26.727829       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:45:27.928299       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m03\" does not exist"
	I0803 23:45:27.928399       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:45:27.951933       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:45:47.448277       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	
	
	==> kube-controller-manager [b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509] <==
	I0803 23:38:17.011162       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m02\" does not exist"
	I0803 23:38:17.022101       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m02" podCIDRs=["10.244.1.0/24"]
	I0803 23:38:17.364314       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-626202-m02"
	I0803 23:38:37.995640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:38:40.206416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.004554ms"
	I0803 23:38:40.233384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.892096ms"
	I0803 23:38:40.234734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.197µs"
	I0803 23:38:40.246676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.271µs"
	I0803 23:38:43.725603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.486764ms"
	I0803 23:38:43.725727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.382µs"
	I0803 23:38:44.017023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.694258ms"
	I0803 23:38:44.017481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.074µs"
	I0803 23:39:14.275733       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:39:14.275879       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m03\" does not exist"
	I0803 23:39:14.300672       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:39:17.389711       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-626202-m03"
	I0803 23:39:35.055020       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m03"
	I0803 23:40:03.643068       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:40:04.994094       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m03\" does not exist"
	I0803 23:40:04.994162       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:40:05.022148       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m03" podCIDRs=["10.244.3.0/24"]
	I0803 23:40:24.842040       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:41:02.447007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m03"
	I0803 23:41:02.497797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.43244ms"
	I0803 23:41:02.498801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.731µs"
	
	
	==> kube-proxy [7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31] <==
	I0803 23:37:29.333669       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:37:29.380150       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0803 23:37:29.462411       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:37:29.462515       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:37:29.462549       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:37:29.465966       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:37:29.466377       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:37:29.466409       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:37:29.468418       1 config.go:192] "Starting service config controller"
	I0803 23:37:29.468622       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:37:29.468675       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:37:29.468696       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:37:29.471427       1 config.go:319] "Starting node config controller"
	I0803 23:37:29.471503       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:37:29.569453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:37:29.569584       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:37:29.572668       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb] <==
	I0803 23:44:12.559150       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:44:12.569656       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0803 23:44:12.626838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:44:12.626895       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:44:12.626912       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:44:12.633063       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:44:12.633391       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:44:12.633421       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:44:12.635736       1 config.go:192] "Starting service config controller"
	I0803 23:44:12.635772       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:44:12.635814       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:44:12.635818       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:44:12.636192       1 config.go:319] "Starting node config controller"
	I0803 23:44:12.636197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:44:12.736788       1 shared_informer.go:320] Caches are synced for node config
	I0803 23:44:12.736848       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:44:12.736880       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f] <==
	E0803 23:37:10.866785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 23:37:11.725284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:37:11.725398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:37:11.752825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0803 23:37:11.753030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0803 23:37:11.755384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:37:11.755427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:37:11.758582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 23:37:11.758657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 23:37:11.809414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:37:11.809569       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:37:11.908899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:37:11.909045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:37:12.013349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:37:12.013399       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:37:12.105167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:37:12.105254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:37:12.128702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:37:12.128856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:37:12.154572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:37:12.154674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 23:37:12.295525       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:37:12.295572       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:37:14.255733       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0803 23:42:30.450051       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced] <==
	I0803 23:44:08.437809       1 serving.go:380] Generated self-signed cert in-memory
	I0803 23:44:10.885941       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0803 23:44:10.886041       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:44:10.892285       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0803 23:44:10.892348       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0803 23:44:10.892402       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0803 23:44:10.892427       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 23:44:10.892460       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0803 23:44:10.892482       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0803 23:44:10.892952       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0803 23:44:10.893073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0803 23:44:10.992643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0803 23:44:10.992684       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0803 23:44:10.992805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 03 23:44:08 multinode-626202 kubelet[3133]: I0803 23:44:08.107753    3133 kubelet_node_status.go:73] "Attempting to register node" node="multinode-626202"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.009152    3133 kubelet_node_status.go:112] "Node was previously registered" node="multinode-626202"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.009767    3133 kubelet_node_status.go:76] "Successfully registered node" node="multinode-626202"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.011473    3133 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.012868    3133 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: E0803 23:44:11.566843    3133 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-626202\" already exists" pod="kube-system/kube-apiserver-multinode-626202"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.580262    3133 apiserver.go:52] "Watching apiserver"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.585493    3133 topology_manager.go:215] "Topology Admit Handler" podUID="a7697739-ef34-41a5-b70f-3f49e921a47c" podNamespace="kube-system" podName="kindnet-jhldg"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.585641    3133 topology_manager.go:215] "Topology Admit Handler" podUID="c55ec034-631a-42e1-acb6-43ee3f34bbfc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-29fhz"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.585690    3133 topology_manager.go:215] "Topology Admit Handler" podUID="5f3a35b2-712a-4122-a882-20045d9785bf" podNamespace="kube-system" podName="kube-proxy-26jcw"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.585780    3133 topology_manager.go:215] "Topology Admit Handler" podUID="e487f793-a9e4-4a2b-a25a-b474c986a645" podNamespace="kube-system" podName="storage-provisioner"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.585844    3133 topology_manager.go:215] "Topology Admit Handler" podUID="eb88a599-41f0-473d-bc71-5a243ed5cd94" podNamespace="default" podName="busybox-fc5497c4f-lj84f"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.595653    3133 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615255    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7697739-ef34-41a5-b70f-3f49e921a47c-xtables-lock\") pod \"kindnet-jhldg\" (UID: \"a7697739-ef34-41a5-b70f-3f49e921a47c\") " pod="kube-system/kindnet-jhldg"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615360    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f3a35b2-712a-4122-a882-20045d9785bf-xtables-lock\") pod \"kube-proxy-26jcw\" (UID: \"5f3a35b2-712a-4122-a882-20045d9785bf\") " pod="kube-system/kube-proxy-26jcw"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615377    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f3a35b2-712a-4122-a882-20045d9785bf-lib-modules\") pod \"kube-proxy-26jcw\" (UID: \"5f3a35b2-712a-4122-a882-20045d9785bf\") " pod="kube-system/kube-proxy-26jcw"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615437    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e487f793-a9e4-4a2b-a25a-b474c986a645-tmp\") pod \"storage-provisioner\" (UID: \"e487f793-a9e4-4a2b-a25a-b474c986a645\") " pod="kube-system/storage-provisioner"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615465    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a7697739-ef34-41a5-b70f-3f49e921a47c-cni-cfg\") pod \"kindnet-jhldg\" (UID: \"a7697739-ef34-41a5-b70f-3f49e921a47c\") " pod="kube-system/kindnet-jhldg"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615479    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7697739-ef34-41a5-b70f-3f49e921a47c-lib-modules\") pod \"kindnet-jhldg\" (UID: \"a7697739-ef34-41a5-b70f-3f49e921a47c\") " pod="kube-system/kindnet-jhldg"
	Aug 03 23:44:15 multinode-626202 kubelet[3133]: I0803 23:44:15.405093    3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 03 23:45:06 multinode-626202 kubelet[3133]: E0803 23:45:06.674206    3133 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0803 23:45:50.035887   48190 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-9607/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
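Editorial note on the stderr above: the "bufio.Scanner: token too long" failure when reading lastStart.txt is Go's standard Scanner refusing any single line longer than its buffer cap (bufio.MaxScanTokenSize, 64 KiB, by default). A minimal sketch, assuming a hypothetical local file name rather than the report's actual path, of how a reader can raise that cap so over-long start-log lines are read instead of aborting:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical file name for illustration; the report reads
	// .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 10 MiB.
	sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, an over-long line surfaces here as
		// "bufio.Scanner: token too long".
		fmt.Fprintln(os.Stderr, err)
	}
}

With the enlarged buffer the long start-log lines would be printed rather than producing the error shown in the stderr block above.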
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-626202 -n multinode-626202
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-626202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.27s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 stop
E0803 23:45:58.007576   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626202 stop: exit status 82 (2m0.47141777s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-626202-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-626202 stop": exit status 82
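Editorial sketch (not the test harness's real helper): the "exit status 82" above is what a Go caller observes when the minikube process exits non-zero; per the stderr block it corresponds to the GUEST_STOP_TIMEOUT failure. A minimal way to run the same stop command and read that code, assuming the binary path shown in the report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-626202", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For the failure above this prints 82, matching the
		// GUEST_STOP_TIMEOUT exit reported by minikube.
		fmt.Printf("exit status %d\n", exitErr.ExitCode())
	}
}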
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626202 status: exit status 3 (18.631726833s)

                                                
                                                
-- stdout --
	multinode-626202
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-626202-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0803 23:48:13.217666   48839 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0803 23:48:13.217703   48839 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-626202 status" : exit status 3
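Editorial sketch: the status failure above is driven by the SSH dial errors in its stderr ("dial tcp 192.168.39.220:22: connect: no route to host"), i.e. the worker VM's SSH port is unreachable after the partial stop. A minimal probe of that port, assuming the node address taken from the report:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the stderr above; adjust for other profiles.
	addr := "192.168.39.220:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// An unreachable or half-stopped VM reports errors such as
		// "connect: no route to host", as seen in the status output.
		fmt.Println("ssh port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}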
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-626202 -n multinode-626202
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-626202 logs -n 25: (1.468486672s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202:/home/docker/cp-test_multinode-626202-m02_multinode-626202.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202 sudo cat                                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m02_multinode-626202.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03:/home/docker/cp-test_multinode-626202-m02_multinode-626202-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202-m03 sudo cat                                   | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m02_multinode-626202-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp testdata/cp-test.txt                                                | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile807028884/001/cp-test_multinode-626202-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202:/home/docker/cp-test_multinode-626202-m03_multinode-626202.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202 sudo cat                                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m03_multinode-626202.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt                       | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02:/home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202-m02 sudo cat                                   | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-626202 node stop m03                                                          | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	| node    | multinode-626202 node start                                                             | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-626202                                                                | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:40 UTC |                     |
	| stop    | -p multinode-626202                                                                     | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:40 UTC |                     |
	| start   | -p multinode-626202                                                                     | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:42 UTC | 03 Aug 24 23:45 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-626202                                                                | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:45 UTC |                     |
	| node    | multinode-626202 node delete                                                            | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:45 UTC | 03 Aug 24 23:45 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-626202 stop                                                                   | multinode-626202 | jenkins | v1.33.1 | 03 Aug 24 23:45 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:42:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:42:29.599266   47076 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:42:29.599380   47076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:42:29.599388   47076 out.go:304] Setting ErrFile to fd 2...
	I0803 23:42:29.599392   47076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:42:29.599604   47076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:42:29.600133   47076 out.go:298] Setting JSON to false
	I0803 23:42:29.600998   47076 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5094,"bootTime":1722723456,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:42:29.601058   47076 start.go:139] virtualization: kvm guest
	I0803 23:42:29.603308   47076 out.go:177] * [multinode-626202] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:42:29.604754   47076 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:42:29.604753   47076 notify.go:220] Checking for updates...
	I0803 23:42:29.606469   47076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:42:29.607959   47076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:42:29.609229   47076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:42:29.610595   47076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:42:29.611892   47076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:42:29.613701   47076 config.go:182] Loaded profile config "multinode-626202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:42:29.613793   47076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:42:29.614214   47076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:42:29.614258   47076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:42:29.629457   47076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0803 23:42:29.629864   47076 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:42:29.630348   47076 main.go:141] libmachine: Using API Version  1
	I0803 23:42:29.630371   47076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:42:29.630750   47076 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:42:29.630921   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:42:29.667533   47076 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:42:29.669044   47076 start.go:297] selected driver: kvm2
	I0803 23:42:29.669062   47076 start.go:901] validating driver "kvm2" against &{Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:42:29.669232   47076 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:42:29.669625   47076 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:42:29.669696   47076 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:42:29.685554   47076 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:42:29.686304   47076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:42:29.686334   47076 cni.go:84] Creating CNI manager for ""
	I0803 23:42:29.686342   47076 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0803 23:42:29.686391   47076 start.go:340] cluster config:
	{Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:42:29.686508   47076 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:42:29.688547   47076 out.go:177] * Starting "multinode-626202" primary control-plane node in "multinode-626202" cluster
	I0803 23:42:29.689938   47076 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:42:29.689970   47076 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:42:29.689977   47076 cache.go:56] Caching tarball of preloaded images
	I0803 23:42:29.690048   47076 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:42:29.690061   47076 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:42:29.690187   47076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/config.json ...
	I0803 23:42:29.690377   47076 start.go:360] acquireMachinesLock for multinode-626202: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:42:29.690418   47076 start.go:364] duration metric: took 22.915µs to acquireMachinesLock for "multinode-626202"
	I0803 23:42:29.690428   47076 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:42:29.690436   47076 fix.go:54] fixHost starting: 
	I0803 23:42:29.690675   47076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:42:29.690705   47076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:42:29.705567   47076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36317
	I0803 23:42:29.706067   47076 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:42:29.706604   47076 main.go:141] libmachine: Using API Version  1
	I0803 23:42:29.706633   47076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:42:29.706932   47076 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:42:29.707135   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:42:29.707272   47076 main.go:141] libmachine: (multinode-626202) Calling .GetState
	I0803 23:42:29.708855   47076 fix.go:112] recreateIfNeeded on multinode-626202: state=Running err=<nil>
	W0803 23:42:29.708872   47076 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:42:29.711115   47076 out.go:177] * Updating the running kvm2 "multinode-626202" VM ...
	I0803 23:42:29.712463   47076 machine.go:94] provisionDockerMachine start ...
	I0803 23:42:29.712484   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:42:29.712715   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:29.715173   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.715682   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:29.715709   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.715858   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:29.716034   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.716217   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.716368   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:29.716543   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:29.716741   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:29.716752   47076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:42:29.834840   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-626202
	
	I0803 23:42:29.834864   47076 main.go:141] libmachine: (multinode-626202) Calling .GetMachineName
	I0803 23:42:29.835093   47076 buildroot.go:166] provisioning hostname "multinode-626202"
	I0803 23:42:29.835118   47076 main.go:141] libmachine: (multinode-626202) Calling .GetMachineName
	I0803 23:42:29.835290   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:29.837753   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.838130   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:29.838150   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.838286   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:29.838497   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.838677   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.838899   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:29.839083   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:29.839267   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:29.839279   47076 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-626202 && echo "multinode-626202" | sudo tee /etc/hostname
	I0803 23:42:29.967206   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-626202
	
	I0803 23:42:29.967233   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:29.970125   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.970529   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:29.970575   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:29.970689   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:29.970884   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.971079   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:29.971247   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:29.971439   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:29.971634   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:29.971653   47076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-626202' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-626202/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-626202' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:42:30.083028   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:42:30.083062   47076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:42:30.083080   47076 buildroot.go:174] setting up certificates
	I0803 23:42:30.083087   47076 provision.go:84] configureAuth start
	I0803 23:42:30.083096   47076 main.go:141] libmachine: (multinode-626202) Calling .GetMachineName
	I0803 23:42:30.083336   47076 main.go:141] libmachine: (multinode-626202) Calling .GetIP
	I0803 23:42:30.086051   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.086434   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.086455   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.086595   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:30.089335   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.089687   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.089719   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.089872   47076 provision.go:143] copyHostCerts
	I0803 23:42:30.089903   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:42:30.089943   47076 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:42:30.089954   47076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:42:30.090038   47076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:42:30.090171   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:42:30.090196   47076 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:42:30.090203   47076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:42:30.090245   47076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:42:30.090331   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:42:30.090355   47076 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:42:30.090362   47076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:42:30.090397   47076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:42:30.090481   47076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.multinode-626202 san=[127.0.0.1 192.168.39.176 localhost minikube multinode-626202]
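The provision step logged above (provision.go:117) signs a fresh server certificate against the profile's CA, with the listed hostnames and IPs as SANs. As a rough illustration of what that involves, here is a minimal, hypothetical Go sketch using only the standard library; the file names, the PKCS#1/RSA key format, and the 2048-bit key size are assumptions, not details read from minikube's implementation.

// Hypothetical sketch: sign a server certificate against an existing CA with
// the SANs shown in the log line above. Not minikube's actual provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caCertPEM, _ := os.ReadFile("ca.pem")     // assumed local copies of the CA material
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM material")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Template carrying the org and SANs that appear in the log line above.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-626202"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-626202"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.176")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// Write server.pem and server-key.pem, the files later copied to /etc/docker.
	certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	if err := os.WriteFile("server.pem", certOut, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", keyOut, 0o600); err != nil {
		log.Fatal(err)
	}
}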
	I0803 23:42:30.153747   47076 provision.go:177] copyRemoteCerts
	I0803 23:42:30.153825   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:42:30.153855   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:30.156805   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.157234   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.157263   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.157547   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:30.157760   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:30.157976   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:30.158157   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
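Each "new ssh client" line above corresponds to a key-based SSH connection to the VM (192.168.39.176:22, user docker, the machine's id_rsa key) over which the subsequent "Run:" commands are executed. A minimal sketch of that pattern, assuming the golang.org/x/crypto/ssh package rather than minikube's own sshutil/ssh_runner helpers:

// Minimal sketch: dial the test VM over SSH with the machine key and run one command.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the throwaway test VM's host key is not pinned here
	}
	client, err := ssh.Dial("tcp", "192.168.39.176:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Any of the "Run:" commands in the log would be executed this way.
	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}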
	I0803 23:42:30.244389   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:42:30.244466   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:42:30.270538   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:42:30.270620   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0803 23:42:30.296437   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:42:30.296510   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:42:30.321268   47076 provision.go:87] duration metric: took 238.16797ms to configureAuth
	I0803 23:42:30.321297   47076 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:42:30.321544   47076 config.go:182] Loaded profile config "multinode-626202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:42:30.321631   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:42:30.324134   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.324496   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:42:30.324523   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:42:30.324714   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:42:30.324897   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:30.325078   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:42:30.325202   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:42:30.325335   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:42:30.325546   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:42:30.325568   47076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:44:01.064406   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:44:01.064436   47076 machine.go:97] duration metric: took 1m31.351959949s to provisionDockerMachine
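The %!s(MISSING) fragments in the logged commands above (and in the later "date +%!s(MISSING).%!N(MISSING)" line) are almost certainly logging artifacts rather than what actually ran on the VM: it is Go's fmt notation for a format verb that was printed without a matching argument, while the real payload (the CRIO_MINIKUBE_OPTIONS line, and an epoch timestamp consistent with "date +%s.%N") appears in the surrounding output. A two-line illustration of that fmt behavior:

// %!s(MISSING) is how Go's fmt package renders a verb with no argument.
package main

import "fmt"

func main() {
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s\n")
	// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
}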
	I0803 23:44:01.064449   47076 start.go:293] postStartSetup for "multinode-626202" (driver="kvm2")
	I0803 23:44:01.064463   47076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:44:01.064506   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.064837   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:44:01.064872   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.067981   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.068367   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.068392   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.068513   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.068676   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.068822   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.068971   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:44:01.158407   47076 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:44:01.162894   47076 command_runner.go:130] > NAME=Buildroot
	I0803 23:44:01.162915   47076 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0803 23:44:01.162920   47076 command_runner.go:130] > ID=buildroot
	I0803 23:44:01.162924   47076 command_runner.go:130] > VERSION_ID=2023.02.9
	I0803 23:44:01.162929   47076 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0803 23:44:01.163185   47076 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:44:01.163210   47076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:44:01.163269   47076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:44:01.163353   47076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:44:01.163365   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem
	I0803 23:44:01.163459   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:44:01.174092   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:44:01.200238   47076 start.go:296] duration metric: took 135.774196ms for postStartSetup
	I0803 23:44:01.200286   47076 fix.go:56] duration metric: took 1m31.509849668s for fixHost
	I0803 23:44:01.200311   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.203027   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.203359   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.203378   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.203569   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.203828   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.204018   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.204156   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.204328   47076 main.go:141] libmachine: Using SSH client type: native
	I0803 23:44:01.204488   47076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0803 23:44:01.204498   47076 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:44:01.318137   47076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722728641.294100479
	
	I0803 23:44:01.318161   47076 fix.go:216] guest clock: 1722728641.294100479
	I0803 23:44:01.318169   47076 fix.go:229] Guest: 2024-08-03 23:44:01.294100479 +0000 UTC Remote: 2024-08-03 23:44:01.200292217 +0000 UTC m=+91.636762816 (delta=93.808262ms)
	I0803 23:44:01.318187   47076 fix.go:200] guest clock delta is within tolerance: 93.808262ms
	I0803 23:44:01.318192   47076 start.go:83] releasing machines lock for "multinode-626202", held for 1m31.627769129s
	I0803 23:44:01.318233   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.318484   47076 main.go:141] libmachine: (multinode-626202) Calling .GetIP
	I0803 23:44:01.321471   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.321859   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.321889   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.322039   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.322590   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.322754   47076 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:44:01.322817   47076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:44:01.322858   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.323084   47076 ssh_runner.go:195] Run: cat /version.json
	I0803 23:44:01.323102   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:44:01.325750   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.325799   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.326219   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.326248   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.326280   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:01.326297   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:01.326445   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.326445   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:44:01.326667   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.326677   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:44:01.326806   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.326872   47076 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:44:01.326980   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:44:01.327044   47076 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:44:01.426700   47076 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0803 23:44:01.426756   47076 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0803 23:44:01.426913   47076 ssh_runner.go:195] Run: systemctl --version
	I0803 23:44:01.433218   47076 command_runner.go:130] > systemd 252 (252)
	I0803 23:44:01.433265   47076 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0803 23:44:01.433346   47076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:44:01.600612   47076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0803 23:44:01.609598   47076 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0803 23:44:01.609741   47076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:44:01.609795   47076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:44:01.620080   47076 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:44:01.620109   47076 start.go:495] detecting cgroup driver to use...
	I0803 23:44:01.620174   47076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:44:01.637674   47076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:44:01.654770   47076 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:44:01.654840   47076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:44:01.670835   47076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:44:01.685600   47076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:44:01.840423   47076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:44:01.976931   47076 docker.go:233] disabling docker service ...
	I0803 23:44:01.976998   47076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:44:01.994460   47076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:44:02.008469   47076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:44:02.145921   47076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:44:02.284815   47076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:44:02.299416   47076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:44:02.318434   47076 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0803 23:44:02.318475   47076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:44:02.318545   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.329377   47076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:44:02.329456   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.340535   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.351868   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.363222   47076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:44:02.374615   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.385467   47076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.396736   47076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:44:02.407323   47076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:44:02.417651   47076 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0803 23:44:02.417737   47076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:44:02.427387   47076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:44:02.560402   47076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:44:04.009028   47076 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.448588877s)
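Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before CRI-O is restarted. This is a sketch reconstructed from the commands, not a dump of the actual file, and the TOML section headers are assumptions:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]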
	I0803 23:44:04.009064   47076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:44:04.009115   47076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:44:04.014228   47076 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0803 23:44:04.014252   47076 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0803 23:44:04.014261   47076 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0803 23:44:04.014272   47076 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0803 23:44:04.014279   47076 command_runner.go:130] > Access: 2024-08-03 23:44:03.869013992 +0000
	I0803 23:44:04.014290   47076 command_runner.go:130] > Modify: 2024-08-03 23:44:03.869013992 +0000
	I0803 23:44:04.014302   47076 command_runner.go:130] > Change: 2024-08-03 23:44:03.869013992 +0000
	I0803 23:44:04.014311   47076 command_runner.go:130] >  Birth: -
	I0803 23:44:04.014334   47076 start.go:563] Will wait 60s for crictl version
	I0803 23:44:04.014374   47076 ssh_runner.go:195] Run: which crictl
	I0803 23:44:04.018167   47076 command_runner.go:130] > /usr/bin/crictl
	I0803 23:44:04.018236   47076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:44:04.055070   47076 command_runner.go:130] > Version:  0.1.0
	I0803 23:44:04.055094   47076 command_runner.go:130] > RuntimeName:  cri-o
	I0803 23:44:04.055101   47076 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0803 23:44:04.055109   47076 command_runner.go:130] > RuntimeApiVersion:  v1
	I0803 23:44:04.056294   47076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:44:04.056362   47076 ssh_runner.go:195] Run: crio --version
	I0803 23:44:04.085950   47076 command_runner.go:130] > crio version 1.29.1
	I0803 23:44:04.085970   47076 command_runner.go:130] > Version:        1.29.1
	I0803 23:44:04.085977   47076 command_runner.go:130] > GitCommit:      unknown
	I0803 23:44:04.085980   47076 command_runner.go:130] > GitCommitDate:  unknown
	I0803 23:44:04.085985   47076 command_runner.go:130] > GitTreeState:   clean
	I0803 23:44:04.085998   47076 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0803 23:44:04.086014   47076 command_runner.go:130] > GoVersion:      go1.21.6
	I0803 23:44:04.086021   47076 command_runner.go:130] > Compiler:       gc
	I0803 23:44:04.086028   47076 command_runner.go:130] > Platform:       linux/amd64
	I0803 23:44:04.086048   47076 command_runner.go:130] > Linkmode:       dynamic
	I0803 23:44:04.086057   47076 command_runner.go:130] > BuildTags:      
	I0803 23:44:04.086064   47076 command_runner.go:130] >   containers_image_ostree_stub
	I0803 23:44:04.086074   47076 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0803 23:44:04.086078   47076 command_runner.go:130] >   btrfs_noversion
	I0803 23:44:04.086083   47076 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0803 23:44:04.086087   47076 command_runner.go:130] >   libdm_no_deferred_remove
	I0803 23:44:04.086090   47076 command_runner.go:130] >   seccomp
	I0803 23:44:04.086095   47076 command_runner.go:130] > LDFlags:          unknown
	I0803 23:44:04.086099   47076 command_runner.go:130] > SeccompEnabled:   true
	I0803 23:44:04.086103   47076 command_runner.go:130] > AppArmorEnabled:  false
	I0803 23:44:04.086194   47076 ssh_runner.go:195] Run: crio --version
	I0803 23:44:04.116741   47076 command_runner.go:130] > crio version 1.29.1
	I0803 23:44:04.116764   47076 command_runner.go:130] > Version:        1.29.1
	I0803 23:44:04.116770   47076 command_runner.go:130] > GitCommit:      unknown
	I0803 23:44:04.116776   47076 command_runner.go:130] > GitCommitDate:  unknown
	I0803 23:44:04.116782   47076 command_runner.go:130] > GitTreeState:   clean
	I0803 23:44:04.116790   47076 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0803 23:44:04.116796   47076 command_runner.go:130] > GoVersion:      go1.21.6
	I0803 23:44:04.116802   47076 command_runner.go:130] > Compiler:       gc
	I0803 23:44:04.116808   47076 command_runner.go:130] > Platform:       linux/amd64
	I0803 23:44:04.116813   47076 command_runner.go:130] > Linkmode:       dynamic
	I0803 23:44:04.116819   47076 command_runner.go:130] > BuildTags:      
	I0803 23:44:04.116849   47076 command_runner.go:130] >   containers_image_ostree_stub
	I0803 23:44:04.116858   47076 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0803 23:44:04.116865   47076 command_runner.go:130] >   btrfs_noversion
	I0803 23:44:04.116872   47076 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0803 23:44:04.116882   47076 command_runner.go:130] >   libdm_no_deferred_remove
	I0803 23:44:04.116888   47076 command_runner.go:130] >   seccomp
	I0803 23:44:04.116898   47076 command_runner.go:130] > LDFlags:          unknown
	I0803 23:44:04.116905   47076 command_runner.go:130] > SeccompEnabled:   true
	I0803 23:44:04.116922   47076 command_runner.go:130] > AppArmorEnabled:  false
	I0803 23:44:04.119755   47076 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:44:04.120987   47076 main.go:141] libmachine: (multinode-626202) Calling .GetIP
	I0803 23:44:04.123667   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:04.123996   47076 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:44:04.124026   47076 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:44:04.124192   47076 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:44:04.128502   47076 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0803 23:44:04.128778   47076 kubeadm.go:883] updating cluster {Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:44:04.129036   47076 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:44:04.129097   47076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:44:04.175426   47076 command_runner.go:130] > {
	I0803 23:44:04.175453   47076 command_runner.go:130] >   "images": [
	I0803 23:44:04.175458   47076 command_runner.go:130] >     {
	I0803 23:44:04.175466   47076 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0803 23:44:04.175470   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175477   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0803 23:44:04.175480   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175484   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175492   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0803 23:44:04.175499   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0803 23:44:04.175502   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175508   47076 command_runner.go:130] >       "size": "87165492",
	I0803 23:44:04.175515   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175522   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175536   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175546   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175551   47076 command_runner.go:130] >     },
	I0803 23:44:04.175555   47076 command_runner.go:130] >     {
	I0803 23:44:04.175561   47076 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0803 23:44:04.175565   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175570   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0803 23:44:04.175574   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175578   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175587   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0803 23:44:04.175597   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0803 23:44:04.175605   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175612   47076 command_runner.go:130] >       "size": "87174707",
	I0803 23:44:04.175621   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175632   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175641   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175648   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175656   47076 command_runner.go:130] >     },
	I0803 23:44:04.175660   47076 command_runner.go:130] >     {
	I0803 23:44:04.175675   47076 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0803 23:44:04.175683   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175694   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0803 23:44:04.175704   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175714   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175725   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0803 23:44:04.175735   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0803 23:44:04.175741   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175748   47076 command_runner.go:130] >       "size": "1363676",
	I0803 23:44:04.175753   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175761   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175766   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175775   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175781   47076 command_runner.go:130] >     },
	I0803 23:44:04.175790   47076 command_runner.go:130] >     {
	I0803 23:44:04.175802   47076 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0803 23:44:04.175809   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175820   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0803 23:44:04.175829   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175837   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175848   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0803 23:44:04.175875   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0803 23:44:04.175884   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175891   47076 command_runner.go:130] >       "size": "31470524",
	I0803 23:44:04.175898   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.175904   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.175913   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.175922   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.175930   47076 command_runner.go:130] >     },
	I0803 23:44:04.175935   47076 command_runner.go:130] >     {
	I0803 23:44:04.175946   47076 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0803 23:44:04.175955   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.175967   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0803 23:44:04.175976   47076 command_runner.go:130] >       ],
	I0803 23:44:04.175983   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.175998   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0803 23:44:04.176018   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0803 23:44:04.176024   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176029   47076 command_runner.go:130] >       "size": "61245718",
	I0803 23:44:04.176036   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.176046   47076 command_runner.go:130] >       "username": "nonroot",
	I0803 23:44:04.176052   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176061   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176066   47076 command_runner.go:130] >     },
	I0803 23:44:04.176075   47076 command_runner.go:130] >     {
	I0803 23:44:04.176084   47076 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0803 23:44:04.176093   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176101   47076 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0803 23:44:04.176107   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176111   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176130   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0803 23:44:04.176144   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0803 23:44:04.176155   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176162   47076 command_runner.go:130] >       "size": "150779692",
	I0803 23:44:04.176171   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176181   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176189   47076 command_runner.go:130] >       },
	I0803 23:44:04.176196   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176201   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176210   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176216   47076 command_runner.go:130] >     },
	I0803 23:44:04.176225   47076 command_runner.go:130] >     {
	I0803 23:44:04.176235   47076 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0803 23:44:04.176243   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176252   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0803 23:44:04.176260   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176267   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176278   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0803 23:44:04.176288   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0803 23:44:04.176297   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176304   47076 command_runner.go:130] >       "size": "117609954",
	I0803 23:44:04.176313   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176326   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176334   47076 command_runner.go:130] >       },
	I0803 23:44:04.176341   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176350   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176359   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176365   47076 command_runner.go:130] >     },
	I0803 23:44:04.176368   47076 command_runner.go:130] >     {
	I0803 23:44:04.176380   47076 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0803 23:44:04.176390   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176402   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0803 23:44:04.176410   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176420   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176448   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0803 23:44:04.176460   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0803 23:44:04.176468   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176479   47076 command_runner.go:130] >       "size": "112198984",
	I0803 23:44:04.176488   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176495   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176503   47076 command_runner.go:130] >       },
	I0803 23:44:04.176510   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176516   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176522   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176527   47076 command_runner.go:130] >     },
	I0803 23:44:04.176538   47076 command_runner.go:130] >     {
	I0803 23:44:04.176547   47076 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0803 23:44:04.176553   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176559   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0803 23:44:04.176564   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176570   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176583   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0803 23:44:04.176593   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0803 23:44:04.176598   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176613   47076 command_runner.go:130] >       "size": "85953945",
	I0803 23:44:04.176622   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.176628   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176637   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176649   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176657   47076 command_runner.go:130] >     },
	I0803 23:44:04.176662   47076 command_runner.go:130] >     {
	I0803 23:44:04.176671   47076 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0803 23:44:04.176681   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176689   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0803 23:44:04.176697   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176703   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176717   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0803 23:44:04.176731   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0803 23:44:04.176737   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176742   47076 command_runner.go:130] >       "size": "63051080",
	I0803 23:44:04.176746   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176750   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.176755   47076 command_runner.go:130] >       },
	I0803 23:44:04.176759   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176763   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176768   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.176772   47076 command_runner.go:130] >     },
	I0803 23:44:04.176775   47076 command_runner.go:130] >     {
	I0803 23:44:04.176781   47076 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0803 23:44:04.176786   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.176790   47076 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0803 23:44:04.176793   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176798   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.176804   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0803 23:44:04.176810   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0803 23:44:04.176814   47076 command_runner.go:130] >       ],
	I0803 23:44:04.176817   47076 command_runner.go:130] >       "size": "750414",
	I0803 23:44:04.176821   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.176825   47076 command_runner.go:130] >         "value": "65535"
	I0803 23:44:04.176829   47076 command_runner.go:130] >       },
	I0803 23:44:04.176833   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.176838   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.176842   47076 command_runner.go:130] >       "pinned": true
	I0803 23:44:04.176845   47076 command_runner.go:130] >     }
	I0803 23:44:04.176853   47076 command_runner.go:130] >   ]
	I0803 23:44:04.176859   47076 command_runner.go:130] > }
	I0803 23:44:04.177062   47076 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:44:04.177077   47076 crio.go:433] Images already preloaded, skipping extraction
	I0803 23:44:04.177126   47076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:44:04.211002   47076 command_runner.go:130] > {
	I0803 23:44:04.211030   47076 command_runner.go:130] >   "images": [
	I0803 23:44:04.211036   47076 command_runner.go:130] >     {
	I0803 23:44:04.211049   47076 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0803 23:44:04.211056   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211067   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0803 23:44:04.211073   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211080   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211093   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0803 23:44:04.211107   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0803 23:44:04.211115   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211120   47076 command_runner.go:130] >       "size": "87165492",
	I0803 23:44:04.211126   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211130   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211137   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211148   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211152   47076 command_runner.go:130] >     },
	I0803 23:44:04.211157   47076 command_runner.go:130] >     {
	I0803 23:44:04.211163   47076 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0803 23:44:04.211168   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211174   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0803 23:44:04.211180   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211184   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211193   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0803 23:44:04.211200   47076 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0803 23:44:04.211205   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211209   47076 command_runner.go:130] >       "size": "87174707",
	I0803 23:44:04.211213   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211227   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211233   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211242   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211248   47076 command_runner.go:130] >     },
	I0803 23:44:04.211252   47076 command_runner.go:130] >     {
	I0803 23:44:04.211260   47076 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0803 23:44:04.211267   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211273   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0803 23:44:04.211279   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211282   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211289   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0803 23:44:04.211298   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0803 23:44:04.211301   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211305   47076 command_runner.go:130] >       "size": "1363676",
	I0803 23:44:04.211309   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211313   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211319   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211325   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211329   47076 command_runner.go:130] >     },
	I0803 23:44:04.211334   47076 command_runner.go:130] >     {
	I0803 23:44:04.211340   47076 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0803 23:44:04.211346   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211351   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0803 23:44:04.211357   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211361   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211370   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0803 23:44:04.211386   47076 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0803 23:44:04.211392   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211396   47076 command_runner.go:130] >       "size": "31470524",
	I0803 23:44:04.211400   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211404   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211410   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211413   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211419   47076 command_runner.go:130] >     },
	I0803 23:44:04.211423   47076 command_runner.go:130] >     {
	I0803 23:44:04.211429   47076 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0803 23:44:04.211434   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211439   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0803 23:44:04.211450   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211460   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211471   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0803 23:44:04.211487   47076 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0803 23:44:04.211496   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211501   47076 command_runner.go:130] >       "size": "61245718",
	I0803 23:44:04.211504   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211508   47076 command_runner.go:130] >       "username": "nonroot",
	I0803 23:44:04.211512   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211516   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211519   47076 command_runner.go:130] >     },
	I0803 23:44:04.211522   47076 command_runner.go:130] >     {
	I0803 23:44:04.211528   47076 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0803 23:44:04.211535   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211540   47076 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0803 23:44:04.211545   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211549   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211556   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0803 23:44:04.211563   47076 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0803 23:44:04.211566   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211571   47076 command_runner.go:130] >       "size": "150779692",
	I0803 23:44:04.211576   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.211580   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.211586   47076 command_runner.go:130] >       },
	I0803 23:44:04.211592   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211596   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211600   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211603   47076 command_runner.go:130] >     },
	I0803 23:44:04.211607   47076 command_runner.go:130] >     {
	I0803 23:44:04.211613   47076 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0803 23:44:04.211619   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211624   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0803 23:44:04.211629   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211633   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211641   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0803 23:44:04.211650   47076 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0803 23:44:04.211660   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211667   47076 command_runner.go:130] >       "size": "117609954",
	I0803 23:44:04.211670   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.211676   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.211679   47076 command_runner.go:130] >       },
	I0803 23:44:04.211685   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211689   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211694   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211697   47076 command_runner.go:130] >     },
	I0803 23:44:04.211700   47076 command_runner.go:130] >     {
	I0803 23:44:04.211706   47076 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0803 23:44:04.211712   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211717   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0803 23:44:04.211722   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211726   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211745   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0803 23:44:04.211754   47076 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0803 23:44:04.211758   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211762   47076 command_runner.go:130] >       "size": "112198984",
	I0803 23:44:04.211770   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.211776   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.211784   47076 command_runner.go:130] >       },
	I0803 23:44:04.211790   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211799   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211805   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211812   47076 command_runner.go:130] >     },
	I0803 23:44:04.211817   47076 command_runner.go:130] >     {
	I0803 23:44:04.211830   47076 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0803 23:44:04.211836   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211844   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0803 23:44:04.211848   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211853   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211863   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0803 23:44:04.211881   47076 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0803 23:44:04.211890   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211897   47076 command_runner.go:130] >       "size": "85953945",
	I0803 23:44:04.211913   47076 command_runner.go:130] >       "uid": null,
	I0803 23:44:04.211923   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.211929   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.211935   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.211940   47076 command_runner.go:130] >     },
	I0803 23:44:04.211946   47076 command_runner.go:130] >     {
	I0803 23:44:04.211955   47076 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0803 23:44:04.211964   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.211971   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0803 23:44:04.211979   47076 command_runner.go:130] >       ],
	I0803 23:44:04.211984   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.211998   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0803 23:44:04.212012   47076 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0803 23:44:04.212020   47076 command_runner.go:130] >       ],
	I0803 23:44:04.212026   47076 command_runner.go:130] >       "size": "63051080",
	I0803 23:44:04.212034   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.212039   47076 command_runner.go:130] >         "value": "0"
	I0803 23:44:04.212047   47076 command_runner.go:130] >       },
	I0803 23:44:04.212053   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.212062   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.212068   47076 command_runner.go:130] >       "pinned": false
	I0803 23:44:04.212076   47076 command_runner.go:130] >     },
	I0803 23:44:04.212081   47076 command_runner.go:130] >     {
	I0803 23:44:04.212093   47076 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0803 23:44:04.212099   47076 command_runner.go:130] >       "repoTags": [
	I0803 23:44:04.212107   47076 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0803 23:44:04.212111   47076 command_runner.go:130] >       ],
	I0803 23:44:04.212119   47076 command_runner.go:130] >       "repoDigests": [
	I0803 23:44:04.212130   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0803 23:44:04.212150   47076 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0803 23:44:04.212158   47076 command_runner.go:130] >       ],
	I0803 23:44:04.212165   47076 command_runner.go:130] >       "size": "750414",
	I0803 23:44:04.212173   47076 command_runner.go:130] >       "uid": {
	I0803 23:44:04.212190   47076 command_runner.go:130] >         "value": "65535"
	I0803 23:44:04.212196   47076 command_runner.go:130] >       },
	I0803 23:44:04.212200   47076 command_runner.go:130] >       "username": "",
	I0803 23:44:04.212213   47076 command_runner.go:130] >       "spec": null,
	I0803 23:44:04.212219   47076 command_runner.go:130] >       "pinned": true
	I0803 23:44:04.212222   47076 command_runner.go:130] >     }
	I0803 23:44:04.212225   47076 command_runner.go:130] >   ]
	I0803 23:44:04.212229   47076 command_runner.go:130] > }
	I0803 23:44:04.212447   47076 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:44:04.212471   47076 cache_images.go:84] Images are preloaded, skipping loading
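	The preload check logged above is driven by the output of `sudo crictl images --output json`: the decoded image list is compared against the expected preload set, and extraction/loading is skipped when everything is present. As a rough illustration only (this is not minikube's actual code; the struct and field names are inferred from the JSON printed in this log, and crictl is assumed to be on PATH), decoding that output in Go could look like:

	// sketch: decode `crictl images --output json` and print each image
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the JSON objects shown in the log above;
	// the type name is illustrative, not taken from the minikube source.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var resp struct {
			Images []criImage `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			panic(err)
		}
		for _, img := range resp.Images {
			fmt.Printf("%v  size=%s pinned=%v\n", img.RepoTags, img.Size, img.Pinned)
		}
	}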
	I0803 23:44:04.212480   47076 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.30.3 crio true true} ...
	I0803 23:44:04.212688   47076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-626202 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
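	The [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in that minikube renders from the node parameters it just logged (kubelet binary for v1.30.3, hostname override multinode-626202, node IP 192.168.39.176). A minimal sketch, assuming only the Go standard library and the values visible in this log (the template and struct below are illustrative, not minikube's own), of rendering such a drop-in with text/template:

	// sketch: render a kubelet systemd drop-in from node parameters
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		params := struct{ BinDir, NodeName, NodeIP string }{
			BinDir:   "/var/lib/minikube/binaries/v1.30.3", // values taken from the log above
			NodeName: "multinode-626202",
			NodeIP:   "192.168.39.176",
		}
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		_ = t.Execute(os.Stdout, params)
	}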
	I0803 23:44:04.212800   47076 ssh_runner.go:195] Run: crio config
	I0803 23:44:04.259338   47076 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0803 23:44:04.259372   47076 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0803 23:44:04.259384   47076 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0803 23:44:04.259389   47076 command_runner.go:130] > #
	I0803 23:44:04.259400   47076 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0803 23:44:04.259409   47076 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0803 23:44:04.259419   47076 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0803 23:44:04.259444   47076 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0803 23:44:04.259453   47076 command_runner.go:130] > # reload'.
	I0803 23:44:04.259461   47076 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0803 23:44:04.259468   47076 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0803 23:44:04.259476   47076 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0803 23:44:04.259484   47076 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0803 23:44:04.259492   47076 command_runner.go:130] > [crio]
	I0803 23:44:04.259502   47076 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0803 23:44:04.259512   47076 command_runner.go:130] > # containers images, in this directory.
	I0803 23:44:04.259546   47076 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0803 23:44:04.259585   47076 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0803 23:44:04.259599   47076 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0803 23:44:04.259615   47076 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0803 23:44:04.259853   47076 command_runner.go:130] > # imagestore = ""
	I0803 23:44:04.259868   47076 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0803 23:44:04.259877   47076 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0803 23:44:04.260062   47076 command_runner.go:130] > storage_driver = "overlay"
	I0803 23:44:04.260090   47076 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0803 23:44:04.260100   47076 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0803 23:44:04.260109   47076 command_runner.go:130] > storage_option = [
	I0803 23:44:04.260196   47076 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0803 23:44:04.260234   47076 command_runner.go:130] > ]
	I0803 23:44:04.260250   47076 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0803 23:44:04.260268   47076 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0803 23:44:04.260551   47076 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0803 23:44:04.260565   47076 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0803 23:44:04.260574   47076 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0803 23:44:04.260582   47076 command_runner.go:130] > # always happen on a node reboot
	I0803 23:44:04.260901   47076 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0803 23:44:04.260932   47076 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0803 23:44:04.260946   47076 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0803 23:44:04.260954   47076 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0803 23:44:04.261153   47076 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0803 23:44:04.261173   47076 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0803 23:44:04.261185   47076 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0803 23:44:04.261387   47076 command_runner.go:130] > # internal_wipe = true
	I0803 23:44:04.261410   47076 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0803 23:44:04.261419   47076 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0803 23:44:04.261678   47076 command_runner.go:130] > # internal_repair = false
	I0803 23:44:04.261690   47076 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0803 23:44:04.261700   47076 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0803 23:44:04.261710   47076 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0803 23:44:04.262169   47076 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0803 23:44:04.262182   47076 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0803 23:44:04.262188   47076 command_runner.go:130] > [crio.api]
	I0803 23:44:04.262195   47076 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0803 23:44:04.262460   47076 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0803 23:44:04.262473   47076 command_runner.go:130] > # IP address on which the stream server will listen.
	I0803 23:44:04.262736   47076 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0803 23:44:04.262749   47076 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0803 23:44:04.262758   47076 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0803 23:44:04.262976   47076 command_runner.go:130] > # stream_port = "0"
	I0803 23:44:04.262988   47076 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0803 23:44:04.263290   47076 command_runner.go:130] > # stream_enable_tls = false
	I0803 23:44:04.263302   47076 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0803 23:44:04.263310   47076 command_runner.go:130] > # stream_idle_timeout = ""
	I0803 23:44:04.263321   47076 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0803 23:44:04.263334   47076 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0803 23:44:04.263340   47076 command_runner.go:130] > # minutes.
	I0803 23:44:04.263352   47076 command_runner.go:130] > # stream_tls_cert = ""
	I0803 23:44:04.263369   47076 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0803 23:44:04.263382   47076 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0803 23:44:04.263474   47076 command_runner.go:130] > # stream_tls_key = ""
	I0803 23:44:04.263494   47076 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0803 23:44:04.263505   47076 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0803 23:44:04.263538   47076 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0803 23:44:04.263547   47076 command_runner.go:130] > # stream_tls_ca = ""
	I0803 23:44:04.263560   47076 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0803 23:44:04.263571   47076 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0803 23:44:04.263585   47076 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0803 23:44:04.263596   47076 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0803 23:44:04.263612   47076 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0803 23:44:04.263629   47076 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0803 23:44:04.263642   47076 command_runner.go:130] > [crio.runtime]
	I0803 23:44:04.263654   47076 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0803 23:44:04.263666   47076 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0803 23:44:04.263675   47076 command_runner.go:130] > # "nofile=1024:2048"
	I0803 23:44:04.263685   47076 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0803 23:44:04.263693   47076 command_runner.go:130] > # default_ulimits = [
	I0803 23:44:04.263699   47076 command_runner.go:130] > # ]
	I0803 23:44:04.263709   47076 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0803 23:44:04.263718   47076 command_runner.go:130] > # no_pivot = false
	I0803 23:44:04.263726   47076 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0803 23:44:04.263739   47076 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0803 23:44:04.263751   47076 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0803 23:44:04.263763   47076 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0803 23:44:04.263771   47076 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0803 23:44:04.263783   47076 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0803 23:44:04.263794   47076 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0803 23:44:04.263801   47076 command_runner.go:130] > # Cgroup setting for conmon
	I0803 23:44:04.263812   47076 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0803 23:44:04.263821   47076 command_runner.go:130] > conmon_cgroup = "pod"
	I0803 23:44:04.263831   47076 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0803 23:44:04.263842   47076 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0803 23:44:04.263856   47076 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0803 23:44:04.263865   47076 command_runner.go:130] > conmon_env = [
	I0803 23:44:04.263873   47076 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0803 23:44:04.263885   47076 command_runner.go:130] > ]
	I0803 23:44:04.263895   47076 command_runner.go:130] > # Additional environment variables to set for all the
	I0803 23:44:04.263904   47076 command_runner.go:130] > # containers. These are overridden if set in the
	I0803 23:44:04.263920   47076 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0803 23:44:04.263929   47076 command_runner.go:130] > # default_env = [
	I0803 23:44:04.263934   47076 command_runner.go:130] > # ]
	I0803 23:44:04.263943   47076 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0803 23:44:04.263957   47076 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0803 23:44:04.263967   47076 command_runner.go:130] > # selinux = false
	I0803 23:44:04.263976   47076 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0803 23:44:04.263988   47076 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0803 23:44:04.263996   47076 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0803 23:44:04.264010   47076 command_runner.go:130] > # seccomp_profile = ""
	I0803 23:44:04.264025   47076 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0803 23:44:04.264042   47076 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0803 23:44:04.264054   47076 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0803 23:44:04.264063   47076 command_runner.go:130] > # which might increase security.
	I0803 23:44:04.264071   47076 command_runner.go:130] > # This option is currently deprecated,
	I0803 23:44:04.264083   47076 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0803 23:44:04.264090   47076 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0803 23:44:04.264103   47076 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0803 23:44:04.264115   47076 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0803 23:44:04.264128   47076 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0803 23:44:04.264139   47076 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0803 23:44:04.264150   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.264159   47076 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0803 23:44:04.264170   47076 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0803 23:44:04.264177   47076 command_runner.go:130] > # the cgroup blockio controller.
	I0803 23:44:04.264184   47076 command_runner.go:130] > # blockio_config_file = ""
	I0803 23:44:04.264198   47076 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0803 23:44:04.264205   47076 command_runner.go:130] > # blockio parameters.
	I0803 23:44:04.264211   47076 command_runner.go:130] > # blockio_reload = false
	I0803 23:44:04.264221   47076 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0803 23:44:04.264234   47076 command_runner.go:130] > # irqbalance daemon.
	I0803 23:44:04.264244   47076 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0803 23:44:04.264257   47076 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0803 23:44:04.264271   47076 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0803 23:44:04.264281   47076 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0803 23:44:04.264297   47076 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0803 23:44:04.264313   47076 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0803 23:44:04.264321   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.264333   47076 command_runner.go:130] > # rdt_config_file = ""
	I0803 23:44:04.264345   47076 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0803 23:44:04.264355   47076 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0803 23:44:04.264404   47076 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0803 23:44:04.264422   47076 command_runner.go:130] > # separate_pull_cgroup = ""
	I0803 23:44:04.264432   47076 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0803 23:44:04.264442   47076 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0803 23:44:04.264448   47076 command_runner.go:130] > # will be added.
	I0803 23:44:04.264455   47076 command_runner.go:130] > # default_capabilities = [
	I0803 23:44:04.264461   47076 command_runner.go:130] > # 	"CHOWN",
	I0803 23:44:04.264469   47076 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0803 23:44:04.264475   47076 command_runner.go:130] > # 	"FSETID",
	I0803 23:44:04.264480   47076 command_runner.go:130] > # 	"FOWNER",
	I0803 23:44:04.264488   47076 command_runner.go:130] > # 	"SETGID",
	I0803 23:44:04.264494   47076 command_runner.go:130] > # 	"SETUID",
	I0803 23:44:04.264499   47076 command_runner.go:130] > # 	"SETPCAP",
	I0803 23:44:04.264509   47076 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0803 23:44:04.264514   47076 command_runner.go:130] > # 	"KILL",
	I0803 23:44:04.264522   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264534   47076 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0803 23:44:04.264547   47076 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0803 23:44:04.264555   47076 command_runner.go:130] > # add_inheritable_capabilities = false
	I0803 23:44:04.264567   47076 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0803 23:44:04.264579   47076 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0803 23:44:04.264587   47076 command_runner.go:130] > default_sysctls = [
	I0803 23:44:04.264597   47076 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0803 23:44:04.264605   47076 command_runner.go:130] > ]
	I0803 23:44:04.264612   47076 command_runner.go:130] > # List of devices on the host that a
	I0803 23:44:04.264625   47076 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0803 23:44:04.264633   47076 command_runner.go:130] > # allowed_devices = [
	I0803 23:44:04.264639   47076 command_runner.go:130] > # 	"/dev/fuse",
	I0803 23:44:04.264657   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264664   47076 command_runner.go:130] > # List of additional devices. specified as
	I0803 23:44:04.264680   47076 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0803 23:44:04.264688   47076 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0803 23:44:04.264700   47076 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0803 23:44:04.264707   47076 command_runner.go:130] > # additional_devices = [
	I0803 23:44:04.264713   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264721   47076 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0803 23:44:04.264732   47076 command_runner.go:130] > # cdi_spec_dirs = [
	I0803 23:44:04.264741   47076 command_runner.go:130] > # 	"/etc/cdi",
	I0803 23:44:04.264748   47076 command_runner.go:130] > # 	"/var/run/cdi",
	I0803 23:44:04.264755   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264773   47076 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0803 23:44:04.264785   47076 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0803 23:44:04.264795   47076 command_runner.go:130] > # Defaults to false.
	I0803 23:44:04.264802   47076 command_runner.go:130] > # device_ownership_from_security_context = false
	I0803 23:44:04.264814   47076 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0803 23:44:04.264823   47076 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0803 23:44:04.264832   47076 command_runner.go:130] > # hooks_dir = [
	I0803 23:44:04.264842   47076 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0803 23:44:04.264851   47076 command_runner.go:130] > # ]
	I0803 23:44:04.264860   47076 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0803 23:44:04.264873   47076 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0803 23:44:04.264884   47076 command_runner.go:130] > # its default mounts from the following two files:
	I0803 23:44:04.264892   47076 command_runner.go:130] > #
	I0803 23:44:04.264903   47076 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0803 23:44:04.264916   47076 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0803 23:44:04.264928   47076 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0803 23:44:04.264935   47076 command_runner.go:130] > #
	I0803 23:44:04.264944   47076 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0803 23:44:04.264961   47076 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0803 23:44:04.264973   47076 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0803 23:44:04.264984   47076 command_runner.go:130] > #      only add mounts it finds in this file.
	I0803 23:44:04.264989   47076 command_runner.go:130] > #
	I0803 23:44:04.264996   47076 command_runner.go:130] > # default_mounts_file = ""
	I0803 23:44:04.265007   47076 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0803 23:44:04.265016   47076 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0803 23:44:04.265025   47076 command_runner.go:130] > pids_limit = 1024
	I0803 23:44:04.265039   47076 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0803 23:44:04.265051   47076 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0803 23:44:04.265071   47076 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0803 23:44:04.265088   47076 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0803 23:44:04.265103   47076 command_runner.go:130] > # log_size_max = -1
	I0803 23:44:04.265113   47076 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0803 23:44:04.265122   47076 command_runner.go:130] > # log_to_journald = false
	I0803 23:44:04.265136   47076 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0803 23:44:04.265146   47076 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0803 23:44:04.265160   47076 command_runner.go:130] > # Path to directory for container attach sockets.
	I0803 23:44:04.265176   47076 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0803 23:44:04.265188   47076 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0803 23:44:04.265196   47076 command_runner.go:130] > # bind_mount_prefix = ""
	I0803 23:44:04.265202   47076 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0803 23:44:04.265210   47076 command_runner.go:130] > # read_only = false
	I0803 23:44:04.265218   47076 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0803 23:44:04.265231   47076 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0803 23:44:04.265240   47076 command_runner.go:130] > # live configuration reload.
	I0803 23:44:04.265246   47076 command_runner.go:130] > # log_level = "info"
	I0803 23:44:04.265257   47076 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0803 23:44:04.265269   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.265277   47076 command_runner.go:130] > # log_filter = ""
	I0803 23:44:04.265286   47076 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0803 23:44:04.265298   47076 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0803 23:44:04.265308   47076 command_runner.go:130] > # separated by comma.
	I0803 23:44:04.265319   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265328   47076 command_runner.go:130] > # uid_mappings = ""
	I0803 23:44:04.265338   47076 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0803 23:44:04.265362   47076 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0803 23:44:04.265372   47076 command_runner.go:130] > # separated by comma.
	I0803 23:44:04.265383   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265393   47076 command_runner.go:130] > # gid_mappings = ""
	I0803 23:44:04.265402   47076 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0803 23:44:04.265414   47076 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0803 23:44:04.265427   47076 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0803 23:44:04.265441   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265451   47076 command_runner.go:130] > # minimum_mappable_uid = -1
	I0803 23:44:04.265463   47076 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0803 23:44:04.265478   47076 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0803 23:44:04.265490   47076 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0803 23:44:04.265505   47076 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0803 23:44:04.265514   47076 command_runner.go:130] > # minimum_mappable_gid = -1
	I0803 23:44:04.265524   47076 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0803 23:44:04.265534   47076 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0803 23:44:04.265544   47076 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0803 23:44:04.265553   47076 command_runner.go:130] > # ctr_stop_timeout = 30
	I0803 23:44:04.265571   47076 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0803 23:44:04.265583   47076 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0803 23:44:04.265594   47076 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0803 23:44:04.265603   47076 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0803 23:44:04.265613   47076 command_runner.go:130] > drop_infra_ctr = false
	I0803 23:44:04.265622   47076 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0803 23:44:04.265633   47076 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0803 23:44:04.265643   47076 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0803 23:44:04.265652   47076 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0803 23:44:04.265664   47076 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0803 23:44:04.265675   47076 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0803 23:44:04.265686   47076 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0803 23:44:04.265713   47076 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0803 23:44:04.265722   47076 command_runner.go:130] > # shared_cpuset = ""
	I0803 23:44:04.265731   47076 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0803 23:44:04.265741   47076 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0803 23:44:04.265751   47076 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0803 23:44:04.265762   47076 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0803 23:44:04.265771   47076 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0803 23:44:04.265779   47076 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0803 23:44:04.265792   47076 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0803 23:44:04.265800   47076 command_runner.go:130] > # enable_criu_support = false
	I0803 23:44:04.265810   47076 command_runner.go:130] > # Enable/disable the generation of the container,
	I0803 23:44:04.265821   47076 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0803 23:44:04.265826   47076 command_runner.go:130] > # enable_pod_events = false
	I0803 23:44:04.265838   47076 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0803 23:44:04.265850   47076 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0803 23:44:04.265860   47076 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0803 23:44:04.265872   47076 command_runner.go:130] > # default_runtime = "runc"
	I0803 23:44:04.265882   47076 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0803 23:44:04.265896   47076 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0803 23:44:04.265910   47076 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0803 23:44:04.265920   47076 command_runner.go:130] > # creation as a file is not desired either.
	I0803 23:44:04.265933   47076 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0803 23:44:04.265948   47076 command_runner.go:130] > # the hostname is being managed dynamically.
	I0803 23:44:04.265957   47076 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0803 23:44:04.265966   47076 command_runner.go:130] > # ]
	I0803 23:44:04.265975   47076 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0803 23:44:04.265984   47076 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0803 23:44:04.265995   47076 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0803 23:44:04.266006   47076 command_runner.go:130] > # Each entry in the table should follow the format:
	I0803 23:44:04.266010   47076 command_runner.go:130] > #
	I0803 23:44:04.266017   47076 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0803 23:44:04.266028   47076 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0803 23:44:04.266111   47076 command_runner.go:130] > # runtime_type = "oci"
	I0803 23:44:04.266121   47076 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0803 23:44:04.266125   47076 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0803 23:44:04.266129   47076 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0803 23:44:04.266133   47076 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0803 23:44:04.266139   47076 command_runner.go:130] > # monitor_env = []
	I0803 23:44:04.266144   47076 command_runner.go:130] > # privileged_without_host_devices = false
	I0803 23:44:04.266149   47076 command_runner.go:130] > # allowed_annotations = []
	I0803 23:44:04.266154   47076 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0803 23:44:04.266159   47076 command_runner.go:130] > # Where:
	I0803 23:44:04.266164   47076 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0803 23:44:04.266169   47076 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0803 23:44:04.266177   47076 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0803 23:44:04.266183   47076 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0803 23:44:04.266189   47076 command_runner.go:130] > #   in $PATH.
	I0803 23:44:04.266194   47076 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0803 23:44:04.266199   47076 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0803 23:44:04.266205   47076 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0803 23:44:04.266211   47076 command_runner.go:130] > #   state.
	I0803 23:44:04.266217   47076 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0803 23:44:04.266225   47076 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0803 23:44:04.266233   47076 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0803 23:44:04.266238   47076 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0803 23:44:04.266246   47076 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0803 23:44:04.266252   47076 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0803 23:44:04.266259   47076 command_runner.go:130] > #   The currently recognized values are:
	I0803 23:44:04.266265   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0803 23:44:04.266273   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0803 23:44:04.266286   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0803 23:44:04.266294   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0803 23:44:04.266301   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0803 23:44:04.266309   47076 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0803 23:44:04.266315   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0803 23:44:04.266324   47076 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0803 23:44:04.266330   47076 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0803 23:44:04.266337   47076 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0803 23:44:04.266342   47076 command_runner.go:130] > #   deprecated option "conmon".
	I0803 23:44:04.266350   47076 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0803 23:44:04.266355   47076 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0803 23:44:04.266364   47076 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0803 23:44:04.266368   47076 command_runner.go:130] > #   should be moved to the container's cgroup
	I0803 23:44:04.266374   47076 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0803 23:44:04.266381   47076 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0803 23:44:04.266387   47076 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0803 23:44:04.266394   47076 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0803 23:44:04.266397   47076 command_runner.go:130] > #
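Putting the options above together, a handler entry for a VM-based runtime could look roughly like the following sketch; the handler name, binary paths, and configuration file location are illustrative assumptions rather than values taken from this run:

	# hypothetical VM-type handler; all names and paths below are assumptions
	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/containerd-shim-kata-v2"
	runtime_type = "vm"
	runtime_root = "/run/vc"
	runtime_config_path = "/usr/share/defaults/kata-containers/configuration.toml"
	privileged_without_host_devices = true
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]
	# optional per-platform binaries, keyed as "os/arch"
	platform_runtime_paths = { "linux/amd64" = "/usr/bin/containerd-shim-kata-v2" }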
	I0803 23:44:04.266404   47076 command_runner.go:130] > # Using the seccomp notifier feature:
	I0803 23:44:04.266407   47076 command_runner.go:130] > #
	I0803 23:44:04.266412   47076 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0803 23:44:04.266422   47076 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0803 23:44:04.266426   47076 command_runner.go:130] > #
	I0803 23:44:04.266431   47076 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0803 23:44:04.266441   47076 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0803 23:44:04.266444   47076 command_runner.go:130] > #
	I0803 23:44:04.266450   47076 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0803 23:44:04.266455   47076 command_runner.go:130] > # feature.
	I0803 23:44:04.266458   47076 command_runner.go:130] > #
	I0803 23:44:04.266463   47076 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0803 23:44:04.266471   47076 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0803 23:44:04.266477   47076 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0803 23:44:04.266484   47076 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0803 23:44:04.266490   47076 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0803 23:44:04.266493   47076 command_runner.go:130] > #
	I0803 23:44:04.266499   47076 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0803 23:44:04.266514   47076 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0803 23:44:04.266519   47076 command_runner.go:130] > #
	I0803 23:44:04.266524   47076 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0803 23:44:04.266532   47076 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0803 23:44:04.266535   47076 command_runner.go:130] > #
	I0803 23:44:04.266543   47076 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0803 23:44:04.266551   47076 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0803 23:44:04.266557   47076 command_runner.go:130] > # limitation.
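As a concrete illustration of the setup described above, a handler that is allowed to process the notifier annotation could be declared like this (the handler name and path are assumptions; the pod side would then carry the annotation "io.kubernetes.cri-o.seccompNotifierAction=stop" and, as noted, restartPolicy: Never):

	# hypothetical handler permitted to process the seccomp notifier annotation
	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]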
	I0803 23:44:04.266561   47076 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0803 23:44:04.266565   47076 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0803 23:44:04.266569   47076 command_runner.go:130] > runtime_type = "oci"
	I0803 23:44:04.266575   47076 command_runner.go:130] > runtime_root = "/run/runc"
	I0803 23:44:04.266579   47076 command_runner.go:130] > runtime_config_path = ""
	I0803 23:44:04.266583   47076 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0803 23:44:04.266589   47076 command_runner.go:130] > monitor_cgroup = "pod"
	I0803 23:44:04.266593   47076 command_runner.go:130] > monitor_exec_cgroup = ""
	I0803 23:44:04.266597   47076 command_runner.go:130] > monitor_env = [
	I0803 23:44:04.266604   47076 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0803 23:44:04.266607   47076 command_runner.go:130] > ]
	I0803 23:44:04.266611   47076 command_runner.go:130] > privileged_without_host_devices = false
	I0803 23:44:04.266619   47076 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0803 23:44:04.266624   47076 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0803 23:44:04.266632   47076 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0803 23:44:04.266640   47076 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0803 23:44:04.266649   47076 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0803 23:44:04.266654   47076 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0803 23:44:04.266663   47076 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0803 23:44:04.266672   47076 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0803 23:44:04.266677   47076 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0803 23:44:04.266686   47076 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0803 23:44:04.266689   47076 command_runner.go:130] > # Example:
	I0803 23:44:04.266694   47076 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0803 23:44:04.266698   47076 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0803 23:44:04.266702   47076 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0803 23:44:04.266707   47076 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0803 23:44:04.266710   47076 command_runner.go:130] > # cpuset = 0
	I0803 23:44:04.266721   47076 command_runner.go:130] > # cpushares = "0-1"
	I0803 23:44:04.266724   47076 command_runner.go:130] > # Where:
	I0803 23:44:04.266730   47076 command_runner.go:130] > # The workload name is workload-type.
	I0803 23:44:04.266736   47076 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0803 23:44:04.266741   47076 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0803 23:44:04.266746   47076 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0803 23:44:04.266753   47076 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0803 23:44:04.266757   47076 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
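Read together, the example pairs a crio.conf workload table with pod annotations. A fuller sketch follows; the resource values are illustrative assumptions (the cpuset and cpushares values in the commented example above look transposed relative to the types they usually carry, so the sketch assumes a numeric share count and a range-style cpuset):

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpushares = 512        # assumed default CPU shares for opted-in containers
	cpuset = "0-1"         # assumed default cpuset for opted-in containers
	# Pod side (the value of the activation annotation is ignored):
	#   io.crio/workload: ""
	#   io.crio.workload-type/my-container: '{"cpushares": "1024"}'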
	I0803 23:44:04.266762   47076 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0803 23:44:04.266768   47076 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0803 23:44:04.266772   47076 command_runner.go:130] > # Default value is set to true
	I0803 23:44:04.266775   47076 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0803 23:44:04.266782   47076 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0803 23:44:04.266787   47076 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0803 23:44:04.266791   47076 command_runner.go:130] > # Default value is set to 'false'
	I0803 23:44:04.266795   47076 command_runner.go:130] > # disable_hostport_mapping = false
	I0803 23:44:04.266801   47076 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0803 23:44:04.266803   47076 command_runner.go:130] > #
	I0803 23:44:04.266808   47076 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0803 23:44:04.266814   47076 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0803 23:44:04.266819   47076 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0803 23:44:04.266825   47076 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0803 23:44:04.266832   47076 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0803 23:44:04.266835   47076 command_runner.go:130] > [crio.image]
	I0803 23:44:04.266840   47076 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0803 23:44:04.266844   47076 command_runner.go:130] > # default_transport = "docker://"
	I0803 23:44:04.266849   47076 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0803 23:44:04.266855   47076 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0803 23:44:04.266859   47076 command_runner.go:130] > # global_auth_file = ""
	I0803 23:44:04.266866   47076 command_runner.go:130] > # The image used to instantiate infra containers.
	I0803 23:44:04.266870   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.266874   47076 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0803 23:44:04.266879   47076 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0803 23:44:04.266887   47076 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0803 23:44:04.266891   47076 command_runner.go:130] > # This option supports live configuration reload.
	I0803 23:44:04.266894   47076 command_runner.go:130] > # pause_image_auth_file = ""
	I0803 23:44:04.266905   47076 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0803 23:44:04.266913   47076 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0803 23:44:04.266921   47076 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0803 23:44:04.266928   47076 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0803 23:44:04.266934   47076 command_runner.go:130] > # pause_command = "/pause"
	I0803 23:44:04.266945   47076 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0803 23:44:04.266953   47076 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0803 23:44:04.266964   47076 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0803 23:44:04.266973   47076 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0803 23:44:04.266984   47076 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0803 23:44:04.266994   47076 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0803 23:44:04.267002   47076 command_runner.go:130] > # pinned_images = [
	I0803 23:44:04.267007   47076 command_runner.go:130] > # ]
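Following the pattern rules just described, a pinned list mixing the three match styles might look like this (the image names are illustrative assumptions):

	pinned_images = [
		"registry.k8s.io/pause:3.9",    # exact match against the whole name
		"registry.k8s.io/kube-*",       # glob match, wildcard at the end
		"*coredns*",                    # keyword match, wildcards on both ends
	]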
	I0803 23:44:04.267018   47076 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0803 23:44:04.267034   47076 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0803 23:44:04.267046   47076 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0803 23:44:04.267057   47076 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0803 23:44:04.267067   47076 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0803 23:44:04.267078   47076 command_runner.go:130] > # signature_policy = ""
	I0803 23:44:04.267084   47076 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0803 23:44:04.267097   47076 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0803 23:44:04.267108   47076 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0803 23:44:04.267117   47076 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0803 23:44:04.267128   47076 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0803 23:44:04.267139   47076 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
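To make the lookup rule concrete, with the directory above a pull made for the "kube-system" namespace would be checked against /etc/crio/policies/kube-system.json; if that file does not exist, or no namespace is supplied with the pull, CRI-O falls back to signature_policy or the system-wide policy:

	signature_policy_dir = "/etc/crio/policies"
	# pull in namespace "kube-system" -> /etc/crio/policies/kube-system.json
	# no namespace, or file missing   -> signature_policy, else /etc/containers/policy.json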
	I0803 23:44:04.267147   47076 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0803 23:44:04.267159   47076 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0803 23:44:04.267168   47076 command_runner.go:130] > # changing them here.
	I0803 23:44:04.267184   47076 command_runner.go:130] > # insecure_registries = [
	I0803 23:44:04.267192   47076 command_runner.go:130] > # ]
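If a registry does need to be trusted without TLS from CRI-O alone, the override would be uncommented here rather than editing the system-wide registries.conf; a minimal sketch with an assumed local registry name:

	insecure_registries = [
		"registry.local:5000",   # assumed private registry served without TLS
	]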
	I0803 23:44:04.267202   47076 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0803 23:44:04.267213   47076 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0803 23:44:04.267219   47076 command_runner.go:130] > # image_volumes = "mkdir"
	I0803 23:44:04.267228   47076 command_runner.go:130] > # Temporary directory to use for storing big files
	I0803 23:44:04.267237   47076 command_runner.go:130] > # big_files_temporary_dir = ""
	I0803 23:44:04.267243   47076 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0803 23:44:04.267253   47076 command_runner.go:130] > # CNI plugins.
	I0803 23:44:04.267258   47076 command_runner.go:130] > [crio.network]
	I0803 23:44:04.267264   47076 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0803 23:44:04.267273   47076 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0803 23:44:04.267279   47076 command_runner.go:130] > # cni_default_network = ""
	I0803 23:44:04.267285   47076 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0803 23:44:04.267290   47076 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0803 23:44:04.267301   47076 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0803 23:44:04.267307   47076 command_runner.go:130] > # plugin_dirs = [
	I0803 23:44:04.267315   47076 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0803 23:44:04.267320   47076 command_runner.go:130] > # ]
	I0803 23:44:04.267332   47076 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0803 23:44:04.267341   47076 command_runner.go:130] > [crio.metrics]
	I0803 23:44:04.267349   47076 command_runner.go:130] > # Globally enable or disable metrics support.
	I0803 23:44:04.267356   47076 command_runner.go:130] > enable_metrics = true
	I0803 23:44:04.267361   47076 command_runner.go:130] > # Specify enabled metrics collectors.
	I0803 23:44:04.267367   47076 command_runner.go:130] > # Per default all metrics are enabled.
	I0803 23:44:04.267373   47076 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0803 23:44:04.267382   47076 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0803 23:44:04.267387   47076 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0803 23:44:04.267391   47076 command_runner.go:130] > # metrics_collectors = [
	I0803 23:44:04.267395   47076 command_runner.go:130] > # 	"operations",
	I0803 23:44:04.267399   47076 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0803 23:44:04.267406   47076 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0803 23:44:04.267410   47076 command_runner.go:130] > # 	"operations_errors",
	I0803 23:44:04.267414   47076 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0803 23:44:04.267418   47076 command_runner.go:130] > # 	"image_pulls_by_name",
	I0803 23:44:04.267428   47076 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0803 23:44:04.267434   47076 command_runner.go:130] > # 	"image_pulls_failures",
	I0803 23:44:04.267443   47076 command_runner.go:130] > # 	"image_pulls_successes",
	I0803 23:44:04.267449   47076 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0803 23:44:04.267458   47076 command_runner.go:130] > # 	"image_layer_reuse",
	I0803 23:44:04.267467   47076 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0803 23:44:04.267474   47076 command_runner.go:130] > # 	"containers_oom_total",
	I0803 23:44:04.267482   47076 command_runner.go:130] > # 	"containers_oom",
	I0803 23:44:04.267489   47076 command_runner.go:130] > # 	"processes_defunct",
	I0803 23:44:04.267507   47076 command_runner.go:130] > # 	"operations_total",
	I0803 23:44:04.267561   47076 command_runner.go:130] > # 	"operations_latency_seconds",
	I0803 23:44:04.267578   47076 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0803 23:44:04.267582   47076 command_runner.go:130] > # 	"operations_errors_total",
	I0803 23:44:04.267589   47076 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0803 23:44:04.267594   47076 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0803 23:44:04.267600   47076 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0803 23:44:04.267605   47076 command_runner.go:130] > # 	"image_pulls_success_total",
	I0803 23:44:04.267609   47076 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0803 23:44:04.267616   47076 command_runner.go:130] > # 	"containers_oom_count_total",
	I0803 23:44:04.267622   47076 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0803 23:44:04.267627   47076 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0803 23:44:04.267645   47076 command_runner.go:130] > # ]
	I0803 23:44:04.267653   47076 command_runner.go:130] > # The port on which the metrics server will listen.
	I0803 23:44:04.267657   47076 command_runner.go:130] > # metrics_port = 9090
	I0803 23:44:04.267662   47076 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0803 23:44:04.267668   47076 command_runner.go:130] > # metrics_socket = ""
	I0803 23:44:04.267673   47076 command_runner.go:130] > # The certificate for the secure metrics server.
	I0803 23:44:04.267679   47076 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0803 23:44:04.267687   47076 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0803 23:44:04.267692   47076 command_runner.go:130] > # certificate on any modification event.
	I0803 23:44:04.267698   47076 command_runner.go:130] > # metrics_cert = ""
	I0803 23:44:04.267702   47076 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0803 23:44:04.267707   47076 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0803 23:44:04.267713   47076 command_runner.go:130] > # metrics_key = ""
	I0803 23:44:04.267719   47076 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0803 23:44:04.267725   47076 command_runner.go:130] > [crio.tracing]
	I0803 23:44:04.267730   47076 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0803 23:44:04.267736   47076 command_runner.go:130] > # enable_tracing = false
	I0803 23:44:04.267741   47076 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0803 23:44:04.267748   47076 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0803 23:44:04.267755   47076 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0803 23:44:04.267761   47076 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
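To make the sampling knob concrete: the rate is expressed per million spans, so 100000 samples roughly 10% of them. An enabled sketch, with an assumed local collector address, could be:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"          # assumed OTLP/gRPC collector
	tracing_sampling_rate_per_million = 100000   # roughly 10% of spans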
	I0803 23:44:04.267765   47076 command_runner.go:130] > # CRI-O NRI configuration.
	I0803 23:44:04.267769   47076 command_runner.go:130] > [crio.nri]
	I0803 23:44:04.267773   47076 command_runner.go:130] > # Globally enable or disable NRI.
	I0803 23:44:04.267782   47076 command_runner.go:130] > # enable_nri = false
	I0803 23:44:04.267788   47076 command_runner.go:130] > # NRI socket to listen on.
	I0803 23:44:04.267793   47076 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0803 23:44:04.267797   47076 command_runner.go:130] > # NRI plugin directory to use.
	I0803 23:44:04.267801   47076 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0803 23:44:04.267808   47076 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0803 23:44:04.267812   47076 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0803 23:44:04.267819   47076 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0803 23:44:04.267828   47076 command_runner.go:130] > # nri_disable_connections = false
	I0803 23:44:04.267835   47076 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0803 23:44:04.267840   47076 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0803 23:44:04.267845   47076 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0803 23:44:04.267852   47076 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0803 23:44:04.267858   47076 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0803 23:44:04.267863   47076 command_runner.go:130] > [crio.stats]
	I0803 23:44:04.267869   47076 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0803 23:44:04.267874   47076 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0803 23:44:04.267881   47076 command_runner.go:130] > # stats_collection_period = 0
	I0803 23:44:04.267912   47076 command_runner.go:130] ! time="2024-08-03 23:44:04.221797741Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0803 23:44:04.267929   47076 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0803 23:44:04.268083   47076 cni.go:84] Creating CNI manager for ""
	I0803 23:44:04.268101   47076 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0803 23:44:04.268114   47076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:44:04.268143   47076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-626202 NodeName:multinode-626202 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:44:04.268264   47076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-626202"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:44:04.268328   47076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:44:04.279466   47076 command_runner.go:130] > kubeadm
	I0803 23:44:04.279485   47076 command_runner.go:130] > kubectl
	I0803 23:44:04.279490   47076 command_runner.go:130] > kubelet
	I0803 23:44:04.279506   47076 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:44:04.279567   47076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 23:44:04.290075   47076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0803 23:44:04.308331   47076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:44:04.325905   47076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0803 23:44:04.343037   47076 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I0803 23:44:04.347411   47076 command_runner.go:130] > 192.168.39.176	control-plane.minikube.internal
	I0803 23:44:04.347492   47076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:44:04.485790   47076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:44:04.503105   47076 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202 for IP: 192.168.39.176
	I0803 23:44:04.503128   47076 certs.go:194] generating shared ca certs ...
	I0803 23:44:04.503149   47076 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:44:04.503307   47076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:44:04.503362   47076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:44:04.503375   47076 certs.go:256] generating profile certs ...
	I0803 23:44:04.503493   47076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/client.key
	I0803 23:44:04.503572   47076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.key.a1d01b81
	I0803 23:44:04.503621   47076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.key
	I0803 23:44:04.503635   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:44:04.503656   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:44:04.503674   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:44:04.503698   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:44:04.503718   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:44:04.503737   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:44:04.503755   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:44:04.503772   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:44:04.503843   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:44:04.503881   47076 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:44:04.503893   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:44:04.503930   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:44:04.503973   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:44:04.504005   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:44:04.504063   47076 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:44:04.504111   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.504133   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.504159   47076 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem -> /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.504750   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:44:04.532103   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:44:04.559044   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:44:04.586637   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:44:04.613899   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 23:44:04.639888   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:44:04.669124   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:44:04.696224   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/multinode-626202/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:44:04.722613   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:44:04.749586   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:44:04.777627   47076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:44:04.805629   47076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:44:04.824455   47076 ssh_runner.go:195] Run: openssl version
	I0803 23:44:04.830675   47076 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0803 23:44:04.830860   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:44:04.844332   47076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.849332   47076 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.849377   47076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.849428   47076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:44:04.855600   47076 command_runner.go:130] > b5213941
	I0803 23:44:04.856218   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:44:04.868766   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:44:04.882510   47076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.887796   47076 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.888069   47076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.888132   47076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:44:04.894011   47076 command_runner.go:130] > 51391683
	I0803 23:44:04.894310   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:44:04.905669   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:44:04.918416   47076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.923792   47076 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.923924   47076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.923978   47076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:44:04.929865   47076 command_runner.go:130] > 3ec20f2e
	I0803 23:44:04.929950   47076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:44:04.941014   47076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:44:04.945914   47076 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:44:04.945961   47076 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0803 23:44:04.945989   47076 command_runner.go:130] > Device: 253,1	Inode: 5244971     Links: 1
	I0803 23:44:04.946005   47076 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0803 23:44:04.946019   47076 command_runner.go:130] > Access: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946029   47076 command_runner.go:130] > Modify: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946039   47076 command_runner.go:130] > Change: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946049   47076 command_runner.go:130] >  Birth: 2024-08-03 23:37:03.769188496 +0000
	I0803 23:44:04.946165   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:44:04.952170   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.952372   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:44:04.958694   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.958768   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:44:04.964948   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.965023   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:44:04.971341   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.971414   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:44:04.977645   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.977947   47076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0803 23:44:04.984286   47076 command_runner.go:130] > Certificate will not expire
	I0803 23:44:04.984358   47076 kubeadm.go:392] StartCluster: {Name:multinode-626202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-626202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:44:04.984501   47076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:44:04.984560   47076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:44:05.031734   47076 command_runner.go:130] > ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649
	I0803 23:44:05.031756   47076 command_runner.go:130] > 7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c
	I0803 23:44:05.031762   47076 command_runner.go:130] > 52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d
	I0803 23:44:05.031770   47076 command_runner.go:130] > 7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31
	I0803 23:44:05.031778   47076 command_runner.go:130] > 661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed
	I0803 23:44:05.031786   47076 command_runner.go:130] > 08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511
	I0803 23:44:05.031847   47076 command_runner.go:130] > 10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f
	I0803 23:44:05.031892   47076 command_runner.go:130] > b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509
	I0803 23:44:05.033880   47076 cri.go:89] found id: "ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649"
	I0803 23:44:05.033897   47076 cri.go:89] found id: "7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c"
	I0803 23:44:05.033900   47076 cri.go:89] found id: "52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d"
	I0803 23:44:05.033903   47076 cri.go:89] found id: "7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31"
	I0803 23:44:05.033906   47076 cri.go:89] found id: "661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed"
	I0803 23:44:05.033909   47076 cri.go:89] found id: "08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511"
	I0803 23:44:05.033912   47076 cri.go:89] found id: "10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f"
	I0803 23:44:05.033914   47076 cri.go:89] found id: "b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509"
	I0803 23:44:05.033918   47076 cri.go:89] found id: ""
	I0803 23:44:05.033985   47076 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.847865190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728893847842809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c33c8b5-228e-400b-b599-9663a4d8b4b9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.848621838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7bc8176-ee08-4884-b1bf-c7befef333df name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.848694655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7bc8176-ee08-4884-b1bf-c7befef333df name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.849093578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7bc8176-ee08-4884-b1bf-c7befef333df name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.890632799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=914882e0-1a6b-4574-a2cf-89e004da03c5 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.890722110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=914882e0-1a6b-4574-a2cf-89e004da03c5 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.893047084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5b9f757-e95d-4edf-942d-4eda1fcc59b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.893554955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728893893531893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5b9f757-e95d-4edf-942d-4eda1fcc59b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.894305658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=737856fa-9e76-49bc-9873-6436698ee19c name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.894379028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=737856fa-9e76-49bc-9873-6436698ee19c name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.894709927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=737856fa-9e76-49bc-9873-6436698ee19c name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.937368328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da4f645c-cd9a-4479-8baf-df4932fc6482 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.937673027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da4f645c-cd9a-4479-8baf-df4932fc6482 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.938864722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=082cd252-cca3-4c83-83c4-cf95721f9a67 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.939387668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728893939341017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=082cd252-cca3-4c83-83c4-cf95721f9a67 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.939897181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d855550e-d5e1-4f72-b356-bad71010016d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.939952520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d855550e-d5e1-4f72-b356-bad71010016d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.940335379Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d855550e-d5e1-4f72-b356-bad71010016d name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.983894356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e8bcbeb-3f1e-49fe-8ae9-0601214fb1af name=/runtime.v1.RuntimeService/Version
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.983976684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e8bcbeb-3f1e-49fe-8ae9-0601214fb1af name=/runtime.v1.RuntimeService/Version
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.985310382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c419b4e6-b1e9-4288-b8c3-c7875c344bc5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.985738087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722728893985716952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c419b4e6-b1e9-4288-b8c3-c7875c344bc5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.986427098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a5bc23a-de5f-43ba-9985-cffc988c9be5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.986486525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a5bc23a-de5f-43ba-9985-cffc988c9be5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:48:13 multinode-626202 crio[2916]: time="2024-08-03 23:48:13.986833035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf31911075b9b1edd637b7b575954794781e71aa72e9143716cbeb7ed1f6915e,PodSandboxId:adad4dd335f68e378bd838ac9e8766ba63753061fed3f5c4bb7feeed73c2f9a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722728685884552568,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11,PodSandboxId:a189a8637d24db7a906094d42ab0047658c2b24c38e03bbcacb6c673a2eae26a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722728652407186055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83,PodSandboxId:7def7c2c9c103841fa70784d535819c7ac16d577228e6aba95de2dfc7975f3fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728652358764948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb,PodSandboxId:ccc61d59726719f7dfae99201a8b848eb141a9f4838b2732f704b8acd792a663,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722728652225599901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]
string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7cb032cc115fc98972f1edd080bae32ebe9f751d3bd4d407e242a59168116f3,PodSandboxId:a8684e3d62987c29e2e099e922fc32783e52f11300e10aab32117ec56ff14468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728652228948780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.ku
bernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6,PodSandboxId:2bd384bb5a28b6264723f25f7d3563d288dbd38eab5fd22deac98e61120f8872,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728647390735689,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced,PodSandboxId:78b2c2c7a7a681f954a40b4bf20f365fa14472fb903f15a101d6ced6fab07202,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728647396122689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23,PodSandboxId:c021f12a225f344022db160588755786e7d6171ec65565c3e7d09e4afc4b0166,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728647306200190,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc,PodSandboxId:4765d15aa38e7c2c069c01d56f32645231f8e9b07b3a52a2444adecc5439f11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728647299127752,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27175547d279f0ad5cd63ba304ce31415d589ef94f9fa6f3ccfcc5fd3230dc73,PodSandboxId:8b26b6ab2746a895c12c033b3de8d1063feb220bf68584015d9c0d62e0225401,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722728323214680052,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-lj84f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb88a599-41f0-473d-bc71-5a243ed5cd94,},Annotations:map[string]string{io.kubernetes.container.hash: 4ee34b81,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649,PodSandboxId:5d2f859ce857dd7f2bc9633b23661dd500c49c3d6a1384b236dd44aa759f8cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728264728692376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-29fhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c55ec034-631a-42e1-acb6-43ee3f34bbfc,},Annotations:map[string]string{io.kubernetes.container.hash: 920d905f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a258641e738f9ae8cc2ed8329803d2e21e651613b61b138256356eee892088c,PodSandboxId:acc17b1aeafc40ff353266ffb0fec10dba76435e1e769c3a715d163dae86d437,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722728264661011746,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: e487f793-a9e4-4a2b-a25a-b474c986a645,},Annotations:map[string]string{io.kubernetes.container.hash: cbbde2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d,PodSandboxId:7434a1f6067be87e858afb6286b91a9f254775b7071efb3784f1e087abbe8046,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728252679722241,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jhldg,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a7697739-ef34-41a5-b70f-3f49e921a47c,},Annotations:map[string]string{io.kubernetes.container.hash: 30393996,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31,PodSandboxId:9a2ee63ac405ad1b3d3928aca1e02bbe7169c5b06d8693d32f773947db605785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728248784734123,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-26jcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5f3a35b2-712a-4122-a882-20045d9785bf,},Annotations:map[string]string{io.kubernetes.container.hash: f17e654c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed,PodSandboxId:bdb3f8907608202ff6896273d9113b2a2ed0fd0cfd25b04db52dc4b60606301c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728228296147286,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c59cd2a8bc1bddf3e9b7e3d26922642d
,},Annotations:map[string]string{io.kubernetes.container.hash: 9bde6d8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511,PodSandboxId:334b0982686a3a324ac01ae386d9c2be5ae576ec3f379cab82ab3dbbbc77546d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722728228259112023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6b6077a6b3f22e2edd80a6503d2c868,},Annotations:
map[string]string{io.kubernetes.container.hash: a4218e4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f,PodSandboxId:f746e6aa04c0c34b0101d6494c74a218496949fe3274aca2c7494d2312c82642,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722728228216824235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 506d62bf5b0c77d618242a218161f4f4,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509,PodSandboxId:6e1b8fce3835e850f1bc6e8ba513379e7bc334fae4cc98ceeecf169185c3a5e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722728228158748513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-626202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2ba33ad6c9c4e6023fa8826b4e642f,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a5bc23a-de5f-43ba-9985-cffc988c9be5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf31911075b9b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   adad4dd335f68       busybox-fc5497c4f-lj84f
	a2dba722c179a       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   a189a8637d24d       kindnet-jhldg
	1ce603ac16e9a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   7def7c2c9c103       coredns-7db6d8ff4d-29fhz
	f7cb032cc115f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   a8684e3d62987       storage-provisioner
	e933251cf98c1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   ccc61d5972671       kube-proxy-26jcw
	3306098d465b9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   78b2c2c7a7a68       kube-scheduler-multinode-626202
	22d5e27a3f1a2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   2bd384bb5a28b       etcd-multinode-626202
	8a5fe95be143b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   c021f12a225f3       kube-apiserver-multinode-626202
	0fae4409b535b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   4765d15aa38e7       kube-controller-manager-multinode-626202
	27175547d279f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   8b26b6ab2746a       busybox-fc5497c4f-lj84f
	ed8181672e8c9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   5d2f859ce857d       coredns-7db6d8ff4d-29fhz
	7a258641e738f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   acc17b1aeafc4       storage-provisioner
	52ac99500c4cf       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   7434a1f6067be       kindnet-jhldg
	7e8ea75035d5c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   9a2ee63ac405a       kube-proxy-26jcw
	661f87888da85       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   bdb3f89076082       etcd-multinode-626202
	08f8e99e72584       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   334b0982686a3       kube-apiserver-multinode-626202
	10ca9a5bb8d9c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   f746e6aa04c0c       kube-scheduler-multinode-626202
	b7994126d209c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   6e1b8fce3835e       kube-controller-manager-multinode-626202
	
	
	==> coredns [1ce603ac16e9ae5dd46dc3b3eadde80c28b01868b0e9374cd000506780001c83] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36971 - 16893 "HINFO IN 9099449929247806992.1502052661215963512. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015019218s
	
	
	==> coredns [ed8181672e8c9c14910561d5cff036af656bdb6b3706aecacb23ba6736c7b649] <==
	[INFO] 10.244.0.3:36685 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001840134s
	[INFO] 10.244.0.3:45668 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088915s
	[INFO] 10.244.0.3:33174 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000035022s
	[INFO] 10.244.0.3:42026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001122494s
	[INFO] 10.244.0.3:46337 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043322s
	[INFO] 10.244.0.3:44769 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046396s
	[INFO] 10.244.0.3:45333 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089521s
	[INFO] 10.244.1.2:55558 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170162s
	[INFO] 10.244.1.2:55873 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182693s
	[INFO] 10.244.1.2:40311 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174612s
	[INFO] 10.244.1.2:53164 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063875s
	[INFO] 10.244.0.3:40527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130165s
	[INFO] 10.244.0.3:56260 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082293s
	[INFO] 10.244.0.3:60666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062261s
	[INFO] 10.244.0.3:60582 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063584s
	[INFO] 10.244.1.2:50593 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134382s
	[INFO] 10.244.1.2:57543 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150736s
	[INFO] 10.244.1.2:45727 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127395s
	[INFO] 10.244.1.2:58801 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000149701s
	[INFO] 10.244.0.3:33370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011872s
	[INFO] 10.244.0.3:49551 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130042s
	[INFO] 10.244.0.3:56273 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085081s
	[INFO] 10.244.0.3:51286 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056031s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-626202
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-626202
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=multinode-626202
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_37_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-626202
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:48:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:44:11 +0000   Sat, 03 Aug 2024 23:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    multinode-626202
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 447bbcc6652343e7a8f7b43a086853c1
	  System UUID:                447bbcc6-6523-43e7-a8f7-b43a086853c1
	  Boot ID:                    20a00bd5-bce6-4c4b-b103-e4236543bb16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lj84f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 coredns-7db6d8ff4d-29fhz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-626202                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-jhldg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-626202             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-626202    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-26jcw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-626202             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node multinode-626202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node multinode-626202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node multinode-626202 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-626202 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-626202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-626202 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-626202 event: Registered Node multinode-626202 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-626202 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-626202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-626202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-626202 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                node-controller  Node multinode-626202 event: Registered Node multinode-626202 in Controller
	
	
	Name:               multinode-626202-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-626202-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=multinode-626202
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_44_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:44:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-626202-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:45:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:46:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:46:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:46:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 03 Aug 2024 23:45:19 +0000   Sat, 03 Aug 2024 23:46:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    multinode-626202-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5258a704ee2645088936da00b0d990e8
	  System UUID:                5258a704-ee26-4508-8936-da00b0d990e8
	  Boot ID:                    9c64f6bf-023b-49d5-a6fb-dfd7a5691bb0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pzwdv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-4vv8k              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m57s
	  kube-system                 kube-proxy-hb6jt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m51s                  kube-proxy       
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m58s (x2 over 9m58s)  kubelet          Node multinode-626202-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x2 over 9m58s)  kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m58s (x2 over 9m58s)  kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m37s                  kubelet          Node multinode-626202-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-626202-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-626202-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m21s                  node-controller  Node multinode-626202-m02 event: Registered Node multinode-626202-m02 in Controller
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-626202-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-626202-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.053105] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.188518] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.118986] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.271804] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[Aug 3 23:37] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +5.405608] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[  +0.061146] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.993135] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.085087] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.161074] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[  +0.107884] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.017324] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 3 23:38] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 3 23:44] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.155883] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +0.164953] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +0.139242] systemd-fstab-generator[2873]: Ignoring "noauto" option for root device
	[  +0.276408] systemd-fstab-generator[2901]: Ignoring "noauto" option for root device
	[  +1.923698] systemd-fstab-generator[3001]: Ignoring "noauto" option for root device
	[  +1.986029] systemd-fstab-generator[3126]: Ignoring "noauto" option for root device
	[  +0.828997] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.043857] kauditd_printk_skb: 45 callbacks suppressed
	[ +11.147206] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.783874] systemd-fstab-generator[3958]: Ignoring "noauto" option for root device
	[ +21.666860] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [22d5e27a3f1a2b539ac64217ff2530d7f922691a1c792cdcc0cff2522b77d7f6] <==
	{"level":"info","ts":"2024-08-03T23:44:08.140312Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-03T23:44:08.140355Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-03T23:44:08.139839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=(17801975325160492603)"}
	{"level":"info","ts":"2024-08-03T23:44:08.14051Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","added-peer-id":"f70d523d4475ce3b","added-peer-peer-urls":["https://192.168.39.176:2380"]}
	{"level":"info","ts":"2024-08-03T23:44:08.142398Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:44:08.144285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:44:08.197561Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-03T23:44:08.197853Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:44:08.197883Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:44:08.198083Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f70d523d4475ce3b","initial-advertise-peer-urls":["https://192.168.39.176:2380"],"listen-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-03T23:44:08.198132Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-03T23:44:09.503095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-03T23:44:09.503204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-03T23:44:09.503327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgPreVoteResp from f70d523d4475ce3b at term 2"}
	{"level":"info","ts":"2024-08-03T23:44:09.503367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became candidate at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.503391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgVoteResp from f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.503418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became leader at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.50346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f70d523d4475ce3b elected leader f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-08-03T23:44:09.509345Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f70d523d4475ce3b","local-member-attributes":"{Name:multinode-626202 ClientURLs:[https://192.168.39.176:2379]}","request-path":"/0/members/f70d523d4475ce3b/attributes","cluster-id":"40fea5b1ef9207e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-03T23:44:09.509354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:44:09.509593Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T23:44:09.509631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-03T23:44:09.509385Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:44:09.511566Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-03T23:44:09.51163Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.176:2379"}
	
	
	==> etcd [661f87888da859d8f674221d096053224df6a1b79fc1b1fcc3235e71ffbd73ed] <==
	{"level":"info","ts":"2024-08-03T23:37:08.922874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:37:08.923406Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:37:08.925288Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T23:37:08.932263Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-03T23:37:08.925321Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:37:08.932422Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:37:08.932472Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:37:08.926785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.176:2379"}
	{"level":"info","ts":"2024-08-03T23:37:08.953576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-03T23:38:17.008277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.407393ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14860630938888900058 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:4e3b911a9aa70dd9>","response":"size:41"}
	{"level":"info","ts":"2024-08-03T23:38:17.009474Z","caller":"traceutil/trace.go:171","msg":"trace[1542968404] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"183.694386ms","start":"2024-08-03T23:38:16.825744Z","end":"2024-08-03T23:38:17.009438Z","steps":["trace[1542968404] 'process raft request'  (duration: 183.382637ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:38:22.160013Z","caller":"traceutil/trace.go:171","msg":"trace[345099284] transaction","detail":"{read_only:false; response_revision:533; number_of_response:1; }","duration":"192.113424ms","start":"2024-08-03T23:38:21.967884Z","end":"2024-08-03T23:38:22.159997Z","steps":["trace[345099284] 'process raft request'  (duration: 191.71439ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:39:14.271572Z","caller":"traceutil/trace.go:171","msg":"trace[494443657] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"236.741273ms","start":"2024-08-03T23:39:14.034774Z","end":"2024-08-03T23:39:14.271515Z","steps":["trace[494443657] 'process raft request'  (duration: 236.644163ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:39:14.27254Z","caller":"traceutil/trace.go:171","msg":"trace[1379350830] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"174.352772ms","start":"2024-08-03T23:39:14.098175Z","end":"2024-08-03T23:39:14.272528Z","steps":["trace[1379350830] 'process raft request'  (duration: 174.189875ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-03T23:42:30.459984Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-03T23:42:30.460543Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-626202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
	{"level":"warn","ts":"2024-08-03T23:42:30.460659Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:42:30.460751Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/08/03 23:42:30 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:42:30.542269Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:42:30.542365Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-03T23:42:30.543867Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f70d523d4475ce3b","current-leader-member-id":"f70d523d4475ce3b"}
	{"level":"info","ts":"2024-08-03T23:42:30.546514Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:42:30.546652Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-08-03T23:42:30.546677Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-626202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
	
	
	==> kernel <==
	 23:48:14 up 11 min,  0 users,  load average: 0.20, 0.31, 0.23
	Linux multinode-626202 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [52ac99500c4cf0be127c7ffcf4074ecfc93d0553178d534fd1c68f85c9bf6e0d] <==
	I0803 23:41:43.715163       1 main.go:299] handling current node
	I0803 23:41:53.712202       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:41:53.712404       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:41:53.712620       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:41:53.712651       1 main.go:299] handling current node
	I0803 23:41:53.712673       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:41:53.712678       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:03.720080       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:42:03.720254       1 main.go:299] handling current node
	I0803 23:42:03.720322       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:42:03.720331       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:03.720546       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:42:03.720571       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:42:13.718114       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:42:13.718160       1 main.go:299] handling current node
	I0803 23:42:13.718176       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:42:13.718181       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:13.718378       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:42:13.718405       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	I0803 23:42:23.721384       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:42:23.721440       1 main.go:299] handling current node
	I0803 23:42:23.721463       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:42:23.721468       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:42:23.721617       1 main.go:295] Handling node with IPs: map[192.168.39.198:{}]
	I0803 23:42:23.721645       1 main.go:322] Node multinode-626202-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a2dba722c179a4a30552f5af50d76eca683193b0a6484487770a3dd8bb4eee11] <==
	I0803 23:47:13.513457       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:47:23.519877       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:47:23.519977       1 main.go:299] handling current node
	I0803 23:47:23.520010       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:47:23.520029       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:47:33.514184       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:47:33.514282       1 main.go:299] handling current node
	I0803 23:47:33.514596       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:47:33.514628       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:47:43.514172       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:47:43.514341       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:47:43.514496       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:47:43.514523       1 main.go:299] handling current node
	I0803 23:47:53.516837       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:47:53.516929       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:47:53.517135       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:47:53.517165       1 main.go:299] handling current node
	I0803 23:48:03.516077       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:48:03.516433       1 main.go:299] handling current node
	I0803 23:48:03.516511       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:48:03.516534       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	I0803 23:48:13.513999       1 main.go:295] Handling node with IPs: map[192.168.39.176:{}]
	I0803 23:48:13.514062       1 main.go:299] handling current node
	I0803 23:48:13.514078       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0803 23:48:13.514083       1 main.go:322] Node multinode-626202-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [08f8e99e72584f9087e88b052f42052b62f9fb777ac3abadf19712b874c69511] <==
	I0803 23:42:30.467480       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0803 23:42:30.467726       1 available_controller.go:439] Shutting down AvailableConditionController
	I0803 23:42:30.467775       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0803 23:42:30.467796       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0803 23:42:30.467840       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0803 23:42:30.467886       1 autoregister_controller.go:165] Shutting down autoregister controller
	W0803 23:42:30.467977       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0803 23:42:30.468029       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0803 23:42:30.468074       1 establishing_controller.go:87] Shutting down EstablishingController
	I0803 23:42:30.468125       1 naming_controller.go:302] Shutting down NamingConditionController
	I0803 23:42:30.468167       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0803 23:42:30.468182       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	W0803 23:42:30.468207       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0803 23:42:30.468208       1 controller.go:129] Ending legacy_token_tracking_controller
	I0803 23:42:30.473553       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0803 23:42:30.473589       1 controller.go:167] Shutting down OpenAPI controller
	I0803 23:42:30.473673       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0803 23:42:30.473698       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0803 23:42:30.473823       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	W0803 23:42:30.474687       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0803 23:42:30.477641       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0803 23:42:30.467984       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0803 23:42:30.482717       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0803 23:42:30.482770       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0803 23:42:30.482788       1 controller.go:84] Shutting down OpenAPI AggregationController
	
	
	==> kube-apiserver [8a5fe95be143b57cff55a5e6398be7fcf431c52eba68efe1dfd18559c58d5c23] <==
	I0803 23:44:10.764046       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0803 23:44:10.861510       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0803 23:44:10.866703       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0803 23:44:10.867724       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0803 23:44:10.867820       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0803 23:44:10.867840       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0803 23:44:10.867855       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0803 23:44:10.867861       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0803 23:44:10.867931       1 shared_informer.go:320] Caches are synced for configmaps
	I0803 23:44:10.868768       1 aggregator.go:165] initial CRD sync complete...
	I0803 23:44:10.868812       1 autoregister_controller.go:141] Starting autoregister controller
	I0803 23:44:10.868819       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0803 23:44:10.868826       1 cache.go:39] Caches are synced for autoregister controller
	I0803 23:44:10.901494       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0803 23:44:10.914405       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:44:10.914427       1 policy_source.go:224] refreshing policies
	I0803 23:44:10.959969       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 23:44:11.773620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0803 23:44:13.236017       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0803 23:44:13.365800       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0803 23:44:13.381680       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0803 23:44:13.465305       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0803 23:44:13.472469       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0803 23:44:23.313753       1 controller.go:615] quota admission added evaluator for: endpoints
	I0803 23:44:23.362895       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0fae4409b535bab6700793ef3f80520e5c1c30709255101d6f1268f4769a72fc] <==
	I0803 23:44:48.445651       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m02" podCIDRs=["10.244.1.0/24"]
	I0803 23:44:49.739444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.815µs"
	I0803 23:44:50.321714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.322µs"
	I0803 23:44:50.348787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.228µs"
	I0803 23:44:50.363507       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.592µs"
	I0803 23:44:50.389656       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.764µs"
	I0803 23:44:50.397687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.507µs"
	I0803 23:44:50.400655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.358µs"
	I0803 23:45:08.329552       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:45:08.350308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.109µs"
	I0803 23:45:08.364814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.756µs"
	I0803 23:45:11.732770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.841671ms"
	I0803 23:45:11.734161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="795.273µs"
	I0803 23:45:26.727829       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:45:27.928299       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m03\" does not exist"
	I0803 23:45:27.928399       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:45:27.951933       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:45:47.448277       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:45:53.003094       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:46:33.480296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.505465ms"
	I0803 23:46:33.480387       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.107µs"
	I0803 23:46:43.296295       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zv26n"
	I0803 23:46:43.326027       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zv26n"
	I0803 23:46:43.326079       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hs49z"
	I0803 23:46:43.349338       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hs49z"
	
	
	==> kube-controller-manager [b7994126d209c43ce09dfc095bc483522f23d42a4b4daeb76a915d69375b1509] <==
	I0803 23:38:17.011162       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m02\" does not exist"
	I0803 23:38:17.022101       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m02" podCIDRs=["10.244.1.0/24"]
	I0803 23:38:17.364314       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-626202-m02"
	I0803 23:38:37.995640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:38:40.206416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.004554ms"
	I0803 23:38:40.233384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.892096ms"
	I0803 23:38:40.234734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.197µs"
	I0803 23:38:40.246676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.271µs"
	I0803 23:38:43.725603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.486764ms"
	I0803 23:38:43.725727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.382µs"
	I0803 23:38:44.017023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.694258ms"
	I0803 23:38:44.017481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.074µs"
	I0803 23:39:14.275733       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:39:14.275879       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m03\" does not exist"
	I0803 23:39:14.300672       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:39:17.389711       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-626202-m03"
	I0803 23:39:35.055020       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m03"
	I0803 23:40:03.643068       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:40:04.994094       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-626202-m03\" does not exist"
	I0803 23:40:04.994162       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:40:05.022148       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-626202-m03" podCIDRs=["10.244.3.0/24"]
	I0803 23:40:24.842040       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m02"
	I0803 23:41:02.447007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-626202-m03"
	I0803 23:41:02.497797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.43244ms"
	I0803 23:41:02.498801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.731µs"
	
	
	==> kube-proxy [7e8ea75035d5cbac9baf1cb39c6ba8e1b511b73fc92cccd9044492673b33da31] <==
	I0803 23:37:29.333669       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:37:29.380150       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0803 23:37:29.462411       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:37:29.462515       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:37:29.462549       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:37:29.465966       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:37:29.466377       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:37:29.466409       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:37:29.468418       1 config.go:192] "Starting service config controller"
	I0803 23:37:29.468622       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:37:29.468675       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:37:29.468696       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:37:29.471427       1 config.go:319] "Starting node config controller"
	I0803 23:37:29.471503       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:37:29.569453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:37:29.569584       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:37:29.572668       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e933251cf98c19f63ba7dc3aca722cfe7bd5d85499591f135ec09da928b758fb] <==
	I0803 23:44:12.559150       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:44:12.569656       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	I0803 23:44:12.626838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:44:12.626895       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:44:12.626912       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:44:12.633063       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:44:12.633391       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:44:12.633421       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:44:12.635736       1 config.go:192] "Starting service config controller"
	I0803 23:44:12.635772       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:44:12.635814       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:44:12.635818       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:44:12.636192       1 config.go:319] "Starting node config controller"
	I0803 23:44:12.636197       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:44:12.736788       1 shared_informer.go:320] Caches are synced for node config
	I0803 23:44:12.736848       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:44:12.736880       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [10ca9a5bb8d9cbbdb55c3431d0014d83667712c84d064ca076e3c71326bf603f] <==
	E0803 23:37:10.866785       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 23:37:11.725284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:37:11.725398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:37:11.752825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0803 23:37:11.753030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0803 23:37:11.755384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:37:11.755427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:37:11.758582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 23:37:11.758657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 23:37:11.809414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:37:11.809569       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:37:11.908899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:37:11.909045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:37:12.013349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:37:12.013399       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:37:12.105167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:37:12.105254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:37:12.128702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:37:12.128856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:37:12.154572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:37:12.154674       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 23:37:12.295525       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:37:12.295572       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:37:14.255733       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0803 23:42:30.450051       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3306098d465b998985517da6000dbb16bb97dd16e5202e0818c4d43c9ac33ced] <==
	I0803 23:44:08.437809       1 serving.go:380] Generated self-signed cert in-memory
	I0803 23:44:10.885941       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0803 23:44:10.886041       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:44:10.892285       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0803 23:44:10.892348       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0803 23:44:10.892402       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0803 23:44:10.892427       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 23:44:10.892460       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0803 23:44:10.892482       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0803 23:44:10.892952       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0803 23:44:10.893073       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0803 23:44:10.992643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0803 23:44:10.992684       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0803 23:44:10.992805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615377    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f3a35b2-712a-4122-a882-20045d9785bf-lib-modules\") pod \"kube-proxy-26jcw\" (UID: \"5f3a35b2-712a-4122-a882-20045d9785bf\") " pod="kube-system/kube-proxy-26jcw"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615437    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e487f793-a9e4-4a2b-a25a-b474c986a645-tmp\") pod \"storage-provisioner\" (UID: \"e487f793-a9e4-4a2b-a25a-b474c986a645\") " pod="kube-system/storage-provisioner"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615465    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a7697739-ef34-41a5-b70f-3f49e921a47c-cni-cfg\") pod \"kindnet-jhldg\" (UID: \"a7697739-ef34-41a5-b70f-3f49e921a47c\") " pod="kube-system/kindnet-jhldg"
	Aug 03 23:44:11 multinode-626202 kubelet[3133]: I0803 23:44:11.615479    3133 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7697739-ef34-41a5-b70f-3f49e921a47c-lib-modules\") pod \"kindnet-jhldg\" (UID: \"a7697739-ef34-41a5-b70f-3f49e921a47c\") " pod="kube-system/kindnet-jhldg"
	Aug 03 23:44:15 multinode-626202 kubelet[3133]: I0803 23:44:15.405093    3133 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 03 23:45:06 multinode-626202 kubelet[3133]: E0803 23:45:06.674206    3133 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:45:06 multinode-626202 kubelet[3133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:46:06 multinode-626202 kubelet[3133]: E0803 23:46:06.673610    3133 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:46:06 multinode-626202 kubelet[3133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:46:06 multinode-626202 kubelet[3133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:46:06 multinode-626202 kubelet[3133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:46:06 multinode-626202 kubelet[3133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:47:06 multinode-626202 kubelet[3133]: E0803 23:47:06.673433    3133 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:47:06 multinode-626202 kubelet[3133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:47:06 multinode-626202 kubelet[3133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:47:06 multinode-626202 kubelet[3133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:47:06 multinode-626202 kubelet[3133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:48:06 multinode-626202 kubelet[3133]: E0803 23:48:06.675023    3133 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:48:06 multinode-626202 kubelet[3133]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:48:06 multinode-626202 kubelet[3133]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:48:06 multinode-626202 kubelet[3133]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:48:06 multinode-626202 kubelet[3133]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0803 23:48:13.559466   48982 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-9607/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
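The "bufio.Scanner: token too long" failure in the stderr block above is a Go standard-library limit rather than a cluster problem: bufio.Scanner refuses to return a line longer than its buffer (bufio.MaxScanTokenSize, 64 KiB, by default), and the captured lastStart.txt evidently contains a longer line. A minimal sketch of reading such a file with an enlarged scanner buffer, using a local file name for illustration (this is not minikube's actual logs.go code):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical input; stands in for .minikube/logs/lastStart.txt in the report.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this, any line over bufio.MaxScanTokenSize (64 KiB) makes
		// sc.Err() return "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("scan failed: %v", err)
		}
	}
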
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-626202 -n multinode-626202
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-626202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.24s)
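The ip6tables failures that repeat every minute in the kubelet section of the logs above come from kubelet's iptables canary: it periodically creates a KUBE-KUBELET-CANARY chain in the nat table to detect rule flushes, and on this guest the IPv6 nat table is unavailable (the ip6table_nat module is not loaded), so the IPv6 half of the check fails with exit status 3. For an IPv4-only cluster this is noise rather than a functional failure. A small illustrative probe of the same condition (kubelet itself goes through k8s.io/kubernetes/pkg/util/iptables, not exec):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Listing the IPv6 nat table fails in the same way the canary does when
		// ip6table_nat is missing: exit status 3, "Table does not exist".
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			fmt.Printf("IPv6 nat table unavailable: %v\n%s", err, out)
			fmt.Println("loading the module (e.g. modprobe ip6table_nat) is the usual remedy")
			return
		}
		fmt.Println("IPv6 nat table present; the canary chain can be created")
	}
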

                                                
                                    
x
+
TestPreload (244.1s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-278819 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0803 23:53:10.665922   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:53:27.619104   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-278819 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m40.575529802s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-278819 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-278819 image pull gcr.io/k8s-minikube/busybox: (3.127169037s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-278819
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-278819: (7.296047215s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-278819 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0803 23:55:58.008050   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-278819 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.04308353s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-278819 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-08-03 23:56:16.562713619 +0000 UTC m=+4112.504957656
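TestPreload pulls gcr.io/k8s-minikube/busybox before stopping the cluster and then expects the image to survive the preload-backed restart; the list above contains only the preloaded v1.24.4 control-plane images, so the expectation fails. A minimal sketch of that kind of assertion against the minikube CLI (illustrative only; the helper below is an assumption, not the repository's test code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// expectImage shells out to `minikube image list` for a profile and checks
	// that the expected reference appears in the output.
	func expectImage(profile, image string) error {
		out, err := exec.Command("minikube", "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			return fmt.Errorf("image list failed: %v\n%s", err, out)
		}
		if !strings.Contains(string(out), image) {
			return fmt.Errorf("expected %s in image list output, got:\n%s", image, out)
		}
		return nil
	}

	func main() {
		if err := expectImage("test-preload-278819", "gcr.io/k8s-minikube/busybox"); err != nil {
			fmt.Println(err)
		}
	}
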
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-278819 -n test-preload-278819
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-278819 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-278819 logs -n 25: (1.099210517s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202 sudo cat                                       | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m03_multinode-626202.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt                       | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m02:/home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n                                                                 | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | multinode-626202-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-626202 ssh -n multinode-626202-m02 sudo cat                                   | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | /home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-626202 node stop m03                                                          | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	| node    | multinode-626202 node start                                                             | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:40 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-626202                                                                | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:40 UTC |                     |
	| stop    | -p multinode-626202                                                                     | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:40 UTC |                     |
	| start   | -p multinode-626202                                                                     | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:42 UTC | 03 Aug 24 23:45 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-626202                                                                | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:45 UTC |                     |
	| node    | multinode-626202 node delete                                                            | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:45 UTC | 03 Aug 24 23:45 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-626202 stop                                                                   | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:45 UTC |                     |
	| start   | -p multinode-626202                                                                     | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:48 UTC | 03 Aug 24 23:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-626202                                                                | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:51 UTC |                     |
	| start   | -p multinode-626202-m02                                                                 | multinode-626202-m02 | jenkins | v1.33.1 | 03 Aug 24 23:51 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-626202-m03                                                                 | multinode-626202-m03 | jenkins | v1.33.1 | 03 Aug 24 23:51 UTC | 03 Aug 24 23:52 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-626202                                                                 | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC |                     |
	| delete  | -p multinode-626202-m03                                                                 | multinode-626202-m03 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	| delete  | -p multinode-626202                                                                     | multinode-626202     | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	| start   | -p test-preload-278819                                                                  | test-preload-278819  | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:54 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-278819 image pull                                                          | test-preload-278819  | jenkins | v1.33.1 | 03 Aug 24 23:54 UTC | 03 Aug 24 23:54 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-278819                                                                  | test-preload-278819  | jenkins | v1.33.1 | 03 Aug 24 23:54 UTC | 03 Aug 24 23:55 UTC |
	| start   | -p test-preload-278819                                                                  | test-preload-278819  | jenkins | v1.33.1 | 03 Aug 24 23:55 UTC | 03 Aug 24 23:56 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-278819 image list                                                          | test-preload-278819  | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC | 03 Aug 24 23:56 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:55:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:55:06.345552   51676 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:55:06.345807   51676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:06.345817   51676 out.go:304] Setting ErrFile to fd 2...
	I0803 23:55:06.345821   51676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:06.346002   51676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:55:06.346519   51676 out.go:298] Setting JSON to false
	I0803 23:55:06.347423   51676 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5850,"bootTime":1722723456,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:55:06.347483   51676 start.go:139] virtualization: kvm guest
	I0803 23:55:06.349750   51676 out.go:177] * [test-preload-278819] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:55:06.351316   51676 notify.go:220] Checking for updates...
	I0803 23:55:06.351357   51676 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:55:06.352814   51676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:55:06.354315   51676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:55:06.355739   51676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:55:06.357128   51676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:55:06.358625   51676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:55:06.360450   51676 config.go:182] Loaded profile config "test-preload-278819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0803 23:55:06.360829   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:06.360897   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:06.376042   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0803 23:55:06.376431   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:06.376963   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:55:06.376983   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:06.377301   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:06.377521   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:06.379403   51676 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 23:55:06.380588   51676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:55:06.380888   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:06.380924   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:06.395368   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45647
	I0803 23:55:06.395757   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:06.396226   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:55:06.396247   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:06.396573   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:06.396766   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:06.430461   51676 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:55:06.431870   51676 start.go:297] selected driver: kvm2
	I0803 23:55:06.431888   51676 start.go:901] validating driver "kvm2" against &{Name:test-preload-278819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-278819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:55:06.431996   51676 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:55:06.432661   51676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:55:06.432727   51676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:55:06.446957   51676 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:55:06.447268   51676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:55:06.447326   51676 cni.go:84] Creating CNI manager for ""
	I0803 23:55:06.447339   51676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:55:06.447392   51676 start.go:340] cluster config:
	{Name:test-preload-278819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-278819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:55:06.447521   51676 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:55:06.449313   51676 out.go:177] * Starting "test-preload-278819" primary control-plane node in "test-preload-278819" cluster
	I0803 23:55:06.450782   51676 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0803 23:55:07.006297   51676 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0803 23:55:07.006337   51676 cache.go:56] Caching tarball of preloaded images
	I0803 23:55:07.006507   51676 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0803 23:55:07.008256   51676 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0803 23:55:07.009410   51676 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0803 23:55:07.120411   51676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0803 23:55:19.585796   51676 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0803 23:55:19.585901   51676 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0803 23:55:20.421960   51676 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0803 23:55:20.422088   51676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/config.json ...
	I0803 23:55:20.422307   51676 start.go:360] acquireMachinesLock for test-preload-278819: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:55:20.422363   51676 start.go:364] duration metric: took 36.497µs to acquireMachinesLock for "test-preload-278819"
	I0803 23:55:20.422377   51676 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:55:20.422382   51676 fix.go:54] fixHost starting: 
	I0803 23:55:20.422666   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:20.422697   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:20.437246   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0803 23:55:20.437730   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:20.438203   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:55:20.438224   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:20.438533   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:20.438735   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:20.438886   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetState
	I0803 23:55:20.440710   51676 fix.go:112] recreateIfNeeded on test-preload-278819: state=Stopped err=<nil>
	I0803 23:55:20.440747   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	W0803 23:55:20.440926   51676 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:55:20.443086   51676 out.go:177] * Restarting existing kvm2 VM for "test-preload-278819" ...
	I0803 23:55:20.444350   51676 main.go:141] libmachine: (test-preload-278819) Calling .Start
	I0803 23:55:20.444509   51676 main.go:141] libmachine: (test-preload-278819) Ensuring networks are active...
	I0803 23:55:20.445251   51676 main.go:141] libmachine: (test-preload-278819) Ensuring network default is active
	I0803 23:55:20.445653   51676 main.go:141] libmachine: (test-preload-278819) Ensuring network mk-test-preload-278819 is active
	I0803 23:55:20.446040   51676 main.go:141] libmachine: (test-preload-278819) Getting domain xml...
	I0803 23:55:20.446797   51676 main.go:141] libmachine: (test-preload-278819) Creating domain...
	I0803 23:55:21.636338   51676 main.go:141] libmachine: (test-preload-278819) Waiting to get IP...
	I0803 23:55:21.637209   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:21.637605   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:21.637675   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:21.637577   51760 retry.go:31] will retry after 259.219305ms: waiting for machine to come up
	I0803 23:55:21.898103   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:21.898544   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:21.898571   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:21.898502   51760 retry.go:31] will retry after 303.315169ms: waiting for machine to come up
	I0803 23:55:22.203982   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:22.204329   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:22.204351   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:22.204300   51760 retry.go:31] will retry after 386.196591ms: waiting for machine to come up
	I0803 23:55:22.591718   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:22.592055   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:22.592074   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:22.592003   51760 retry.go:31] will retry after 515.215675ms: waiting for machine to come up
	I0803 23:55:23.108432   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:23.108825   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:23.108848   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:23.108778   51760 retry.go:31] will retry after 597.700019ms: waiting for machine to come up
	I0803 23:55:23.708550   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:23.708909   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:23.708932   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:23.708865   51760 retry.go:31] will retry after 603.811005ms: waiting for machine to come up
	I0803 23:55:24.314627   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:24.314909   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:24.314930   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:24.314851   51760 retry.go:31] will retry after 981.337464ms: waiting for machine to come up
	I0803 23:55:25.298047   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:25.298470   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:25.298491   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:25.298432   51760 retry.go:31] will retry after 1.094633202s: waiting for machine to come up
	I0803 23:55:26.394980   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:26.395359   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:26.395387   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:26.395300   51760 retry.go:31] will retry after 1.527180738s: waiting for machine to come up
	I0803 23:55:27.924949   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:27.925422   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:27.925451   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:27.925371   51760 retry.go:31] will retry after 1.831049084s: waiting for machine to come up
	I0803 23:55:29.759401   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:29.759775   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:29.759805   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:29.759726   51760 retry.go:31] will retry after 2.325534806s: waiting for machine to come up
	I0803 23:55:32.087779   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:32.088199   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:32.088226   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:32.088152   51760 retry.go:31] will retry after 3.210876845s: waiting for machine to come up
	I0803 23:55:35.301663   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:35.302034   51676 main.go:141] libmachine: (test-preload-278819) DBG | unable to find current IP address of domain test-preload-278819 in network mk-test-preload-278819
	I0803 23:55:35.302090   51676 main.go:141] libmachine: (test-preload-278819) DBG | I0803 23:55:35.302001   51760 retry.go:31] will retry after 4.397016883s: waiting for machine to come up
	I0803 23:55:39.703652   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.704046   51676 main.go:141] libmachine: (test-preload-278819) Found IP for machine: 192.168.39.129
	I0803 23:55:39.704075   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has current primary IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.704082   51676 main.go:141] libmachine: (test-preload-278819) Reserving static IP address...
	I0803 23:55:39.704406   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "test-preload-278819", mac: "52:54:00:66:14:33", ip: "192.168.39.129"} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:39.704425   51676 main.go:141] libmachine: (test-preload-278819) DBG | skip adding static IP to network mk-test-preload-278819 - found existing host DHCP lease matching {name: "test-preload-278819", mac: "52:54:00:66:14:33", ip: "192.168.39.129"}
	I0803 23:55:39.704434   51676 main.go:141] libmachine: (test-preload-278819) Reserved static IP address: 192.168.39.129
	I0803 23:55:39.704445   51676 main.go:141] libmachine: (test-preload-278819) Waiting for SSH to be available...
	I0803 23:55:39.704456   51676 main.go:141] libmachine: (test-preload-278819) DBG | Getting to WaitForSSH function...
	I0803 23:55:39.706537   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.706791   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:39.706814   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.706910   51676 main.go:141] libmachine: (test-preload-278819) DBG | Using SSH client type: external
	I0803 23:55:39.706936   51676 main.go:141] libmachine: (test-preload-278819) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa (-rw-------)
	I0803 23:55:39.706965   51676 main.go:141] libmachine: (test-preload-278819) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:55:39.706981   51676 main.go:141] libmachine: (test-preload-278819) DBG | About to run SSH command:
	I0803 23:55:39.706998   51676 main.go:141] libmachine: (test-preload-278819) DBG | exit 0
	I0803 23:55:39.829615   51676 main.go:141] libmachine: (test-preload-278819) DBG | SSH cmd err, output: <nil>: 
	I0803 23:55:39.829960   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetConfigRaw
	I0803 23:55:39.830564   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetIP
	I0803 23:55:39.833009   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.833376   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:39.833405   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.833651   51676 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/config.json ...
	I0803 23:55:39.833867   51676 machine.go:94] provisionDockerMachine start ...
	I0803 23:55:39.833893   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:39.834093   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:39.836079   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.836451   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:39.836480   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.836583   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:39.836736   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:39.836869   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:39.837004   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:39.837157   51676 main.go:141] libmachine: Using SSH client type: native
	I0803 23:55:39.837381   51676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0803 23:55:39.837394   51676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:55:39.937718   51676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0803 23:55:39.937742   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetMachineName
	I0803 23:55:39.938000   51676 buildroot.go:166] provisioning hostname "test-preload-278819"
	I0803 23:55:39.938021   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetMachineName
	I0803 23:55:39.938203   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:39.940940   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.941280   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:39.941306   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:39.941460   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:39.941621   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:39.941786   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:39.941882   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:39.941990   51676 main.go:141] libmachine: Using SSH client type: native
	I0803 23:55:39.942196   51676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0803 23:55:39.942211   51676 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-278819 && echo "test-preload-278819" | sudo tee /etc/hostname
	I0803 23:55:40.055833   51676 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-278819
	
	I0803 23:55:40.055863   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:40.058700   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.058972   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:40.059000   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.059166   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:40.059351   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:40.059512   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:40.059620   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:40.059758   51676 main.go:141] libmachine: Using SSH client type: native
	I0803 23:55:40.059987   51676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0803 23:55:40.060006   51676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-278819' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-278819/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-278819' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:55:40.170665   51676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:55:40.170693   51676 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:55:40.170745   51676 buildroot.go:174] setting up certificates
	I0803 23:55:40.170756   51676 provision.go:84] configureAuth start
	I0803 23:55:40.170770   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetMachineName
	I0803 23:55:40.171070   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetIP
	I0803 23:55:40.173710   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.174051   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:40.174076   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.174262   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:40.176107   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.176383   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:40.176424   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.176480   51676 provision.go:143] copyHostCerts
	I0803 23:55:40.176556   51676 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:55:40.176572   51676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:55:40.176654   51676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:55:40.176786   51676 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:55:40.176798   51676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:55:40.176839   51676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:55:40.176929   51676 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:55:40.176940   51676 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:55:40.176978   51676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:55:40.177049   51676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.test-preload-278819 san=[127.0.0.1 192.168.39.129 localhost minikube test-preload-278819]
	I0803 23:55:40.465168   51676 provision.go:177] copyRemoteCerts
	I0803 23:55:40.465236   51676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:55:40.465268   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:40.467859   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.468249   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:40.468279   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.468460   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:40.468670   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:40.468787   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:40.469012   51676 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa Username:docker}
	I0803 23:55:40.552308   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:55:40.577396   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0803 23:55:40.600901   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:55:40.624825   51676 provision.go:87] duration metric: took 454.055447ms to configureAuth
	I0803 23:55:40.624857   51676 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:55:40.625083   51676 config.go:182] Loaded profile config "test-preload-278819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0803 23:55:40.625155   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:40.628033   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.628419   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:40.628448   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.628622   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:40.628855   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:40.629053   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:40.629226   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:40.629389   51676 main.go:141] libmachine: Using SSH client type: native
	I0803 23:55:40.629621   51676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0803 23:55:40.629643   51676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:55:40.897195   51676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:55:40.897221   51676 machine.go:97] duration metric: took 1.063341468s to provisionDockerMachine
	I0803 23:55:40.897236   51676 start.go:293] postStartSetup for "test-preload-278819" (driver="kvm2")
	I0803 23:55:40.897249   51676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:55:40.897265   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:40.897588   51676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:55:40.897615   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:40.900347   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.900680   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:40.900706   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:40.900842   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:40.901025   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:40.901175   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:40.901302   51676 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa Username:docker}
	I0803 23:55:40.984588   51676 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:55:40.989255   51676 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:55:40.989277   51676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:55:40.989330   51676 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:55:40.989424   51676 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:55:40.989530   51676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:55:41.000095   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:55:41.024964   51676 start.go:296] duration metric: took 127.712261ms for postStartSetup
	I0803 23:55:41.025005   51676 fix.go:56] duration metric: took 20.602622799s for fixHost
	I0803 23:55:41.025029   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:41.027466   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.027750   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:41.027773   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.027870   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:41.028074   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:41.028219   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:41.028375   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:41.028534   51676 main.go:141] libmachine: Using SSH client type: native
	I0803 23:55:41.028695   51676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0803 23:55:41.028704   51676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:55:41.130458   51676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729341.106398304
	
	I0803 23:55:41.130478   51676 fix.go:216] guest clock: 1722729341.106398304
	I0803 23:55:41.130487   51676 fix.go:229] Guest: 2024-08-03 23:55:41.106398304 +0000 UTC Remote: 2024-08-03 23:55:41.025010383 +0000 UTC m=+34.712699718 (delta=81.387921ms)
	I0803 23:55:41.130531   51676 fix.go:200] guest clock delta is within tolerance: 81.387921ms
	I0803 23:55:41.130540   51676 start.go:83] releasing machines lock for "test-preload-278819", held for 20.708167108s
	I0803 23:55:41.130570   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:41.130812   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetIP
	I0803 23:55:41.133216   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.133534   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:41.133570   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.133702   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:41.134231   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:41.134418   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:55:41.134523   51676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:55:41.134560   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:41.134608   51676 ssh_runner.go:195] Run: cat /version.json
	I0803 23:55:41.134628   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:55:41.137041   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.137113   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.137373   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:41.137400   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.137500   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:41.137523   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:41.137503   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:41.137657   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:55:41.137722   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:41.137786   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:55:41.137843   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:41.137896   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:55:41.137956   51676 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa Username:docker}
	I0803 23:55:41.138021   51676 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa Username:docker}
	I0803 23:55:41.214435   51676 ssh_runner.go:195] Run: systemctl --version
	I0803 23:55:41.231740   51676 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:55:41.388981   51676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:55:41.394839   51676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:55:41.394905   51676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:55:41.411040   51676 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:55:41.411064   51676 start.go:495] detecting cgroup driver to use...
	I0803 23:55:41.411126   51676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:55:41.427002   51676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:55:41.440887   51676 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:55:41.440960   51676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:55:41.454554   51676 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:55:41.468316   51676 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:55:41.578741   51676 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:55:41.705859   51676 docker.go:233] disabling docker service ...
	I0803 23:55:41.705934   51676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:55:41.720728   51676 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:55:41.733532   51676 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:55:41.861263   51676 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:55:41.970418   51676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:55:41.985042   51676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:55:42.004069   51676 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0803 23:55:42.004151   51676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:55:42.015280   51676 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:55:42.015338   51676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:55:42.026342   51676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:55:42.037257   51676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:55:42.047836   51676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:55:42.058854   51676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:55:42.070086   51676 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:55:42.088445   51676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:55:42.098842   51676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:55:42.108281   51676 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:55:42.108338   51676 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:55:42.122214   51676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:55:42.132380   51676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:55:42.237905   51676 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:55:42.369597   51676 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:55:42.369664   51676 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:55:42.375256   51676 start.go:563] Will wait 60s for crictl version
	I0803 23:55:42.375340   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:42.379247   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:55:42.418454   51676 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:55:42.418512   51676 ssh_runner.go:195] Run: crio --version
	I0803 23:55:42.447072   51676 ssh_runner.go:195] Run: crio --version
	I0803 23:55:42.477974   51676 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0803 23:55:42.479620   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetIP
	I0803 23:55:42.482090   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:42.482416   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:55:42.482444   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:55:42.482617   51676 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:55:42.487005   51676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:55:42.499960   51676 kubeadm.go:883] updating cluster {Name:test-preload-278819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-278819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:55:42.500061   51676 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0803 23:55:42.500110   51676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:55:42.536302   51676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0803 23:55:42.536359   51676 ssh_runner.go:195] Run: which lz4
	I0803 23:55:42.540525   51676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0803 23:55:42.544639   51676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 23:55:42.544675   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0803 23:55:44.092804   51676 crio.go:462] duration metric: took 1.552321306s to copy over tarball
	I0803 23:55:44.092874   51676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 23:55:46.484157   51676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.391259802s)
	I0803 23:55:46.484180   51676 crio.go:469] duration metric: took 2.391353834s to extract the tarball
	I0803 23:55:46.484187   51676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 23:55:46.527333   51676 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:55:46.580285   51676 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0803 23:55:46.580305   51676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0803 23:55:46.580385   51676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0803 23:55:46.580414   51676 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0803 23:55:46.580426   51676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0803 23:55:46.580360   51676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:55:46.580473   51676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0803 23:55:46.580428   51676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0803 23:55:46.580435   51676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 23:55:46.580392   51676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 23:55:46.582003   51676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0803 23:55:46.582020   51676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0803 23:55:46.582026   51676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0803 23:55:46.582034   51676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:55:46.582011   51676 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0803 23:55:46.582012   51676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 23:55:46.582016   51676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 23:55:46.582329   51676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0803 23:55:46.714708   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 23:55:46.731660   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0803 23:55:46.734134   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0803 23:55:46.744038   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0803 23:55:46.746915   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0803 23:55:46.777413   51676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0803 23:55:46.777461   51676 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 23:55:46.777513   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:46.786345   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0803 23:55:46.858076   51676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0803 23:55:46.858117   51676 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0803 23:55:46.858130   51676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0803 23:55:46.858160   51676 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0803 23:55:46.858175   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:46.858197   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:46.858220   51676 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0803 23:55:46.858248   51676 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0803 23:55:46.858289   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:46.864066   51676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0803 23:55:46.864108   51676 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0803 23:55:46.864162   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0803 23:55:46.864189   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:46.889691   51676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0803 23:55:46.889730   51676 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0803 23:55:46.889736   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0803 23:55:46.889774   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:46.889800   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0803 23:55:46.889852   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0803 23:55:46.889699   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0803 23:55:46.931326   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0803 23:55:46.946462   51676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0803 23:55:46.946588   51676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0803 23:55:46.995481   51676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0803 23:55:46.995587   51676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0803 23:55:47.017963   51676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0803 23:55:47.018053   51676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0803 23:55:47.018089   51676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0803 23:55:47.018156   51676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0803 23:55:47.021684   51676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0803 23:55:47.021780   51676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0803 23:55:47.021789   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0803 23:55:47.048543   51676 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0803 23:55:47.048564   51676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0803 23:55:47.048579   51676 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0803 23:55:47.048590   51676 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0803 23:55:47.048602   51676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0803 23:55:47.048635   51676 ssh_runner.go:195] Run: which crictl
	I0803 23:55:47.048662   51676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0803 23:55:47.048664   51676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0803 23:55:47.048637   51676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0803 23:55:47.048697   51676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0803 23:55:47.075289   51676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0803 23:55:47.075399   51676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0803 23:55:47.466172   51676 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:55:50.119692   51676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.07100023s)
	I0803 23:55:50.119730   51676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0803 23:55:50.119751   51676 ssh_runner.go:235] Completed: which crictl: (3.07109594s)
	I0803 23:55:50.119764   51676 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0803 23:55:50.119798   51676 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.044381293s)
	I0803 23:55:50.119817   51676 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0803 23:55:50.119824   51676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0803 23:55:50.119830   51676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0803 23:55:50.119862   51676 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.653657526s)
	I0803 23:55:50.480818   51676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0803 23:55:50.480864   51676 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0803 23:55:50.480917   51676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0803 23:55:50.480934   51676 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0803 23:55:50.481054   51676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0803 23:55:51.321202   51676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0803 23:55:51.321247   51676 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0803 23:55:51.321299   51676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0803 23:55:51.321326   51676 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0803 23:55:53.473405   51676 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.152078993s)
	I0803 23:55:53.473438   51676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0803 23:55:53.473475   51676 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0803 23:55:53.473539   51676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0803 23:55:54.225559   51676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0803 23:55:54.225599   51676 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0803 23:55:54.225640   51676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0803 23:55:54.666069   51676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0803 23:55:54.666122   51676 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0803 23:55:54.666175   51676 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0803 23:55:54.813422   51676 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0803 23:55:54.813468   51676 cache_images.go:123] Successfully loaded all cached images
	I0803 23:55:54.813475   51676 cache_images.go:92] duration metric: took 8.233158412s to LoadCachedImages
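Each cached image in the sequence above is handled with the same check-remove-load pattern: inspect the runtime store for the image ID, remove the stale tag with crictl when the expected hash is missing, then load the tarball staged under /var/lib/minikube/images. A minimal sketch of that flow for one image (the tag and path are taken from this run; the sketch is illustrative, not minikube's exact code path):

	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.3-0 \
	  || {
	    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0 || true   # drop the stale reference
	    sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0       # load the cached tarball
	  }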
	I0803 23:55:54.813490   51676 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.24.4 crio true true} ...
	I0803 23:55:54.813602   51676 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-278819 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-278819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:55:54.813679   51676 ssh_runner.go:195] Run: crio config
	I0803 23:55:54.864081   51676 cni.go:84] Creating CNI manager for ""
	I0803 23:55:54.864106   51676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:55:54.864123   51676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:55:54.864142   51676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-278819 NodeName:test-preload-278819 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:55:54.864278   51676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-278819"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:55:54.864342   51676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0803 23:55:54.875139   51676 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:55:54.875206   51676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 23:55:54.884774   51676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0803 23:55:54.901489   51676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:55:54.918094   51676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0803 23:55:54.935276   51676 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0803 23:55:54.939261   51676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
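The command above keeps /etc/hosts pointing control-plane.minikube.internal at the node's current IP: it filters out any old entry, appends the new one into /tmp/h.$$, and copies that file back over /etc/hosts. A quick manual verification on the node (illustrative, not part of the test):

	grep 'control-plane.minikube.internal' /etc/hosts
	# expected: 192.168.39.129	control-plane.minikube.internal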
	I0803 23:55:54.952270   51676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:55:55.062233   51676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:55:55.079093   51676 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819 for IP: 192.168.39.129
	I0803 23:55:55.079118   51676 certs.go:194] generating shared ca certs ...
	I0803 23:55:55.079145   51676 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:55:55.079354   51676 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:55:55.079430   51676 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:55:55.079444   51676 certs.go:256] generating profile certs ...
	I0803 23:55:55.079563   51676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/client.key
	I0803 23:55:55.079655   51676 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/apiserver.key.6e10421a
	I0803 23:55:55.079707   51676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/proxy-client.key
	I0803 23:55:55.079865   51676 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:55:55.079897   51676 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:55:55.079907   51676 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:55:55.079928   51676 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:55:55.079952   51676 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:55:55.079974   51676 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:55:55.080030   51676 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:55:55.080768   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:55:55.147470   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:55:55.185984   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:55:55.214681   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:55:55.243211   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0803 23:55:55.277080   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:55:55.314927   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:55:55.339382   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:55:55.363843   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:55:55.388597   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:55:55.411700   51676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:55:55.435571   51676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:55:55.453141   51676 ssh_runner.go:195] Run: openssl version
	I0803 23:55:55.459087   51676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:55:55.469965   51676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:55:55.474369   51676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:55:55.474415   51676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:55:55.480488   51676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:55:55.491169   51676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:55:55.501935   51676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:55:55.506451   51676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:55:55.506510   51676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:55:55.511891   51676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:55:55.522220   51676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:55:55.532611   51676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:55:55.536898   51676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:55:55.536947   51676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:55:55.542458   51676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:55:55.553270   51676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:55:55.557661   51676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:55:55.563672   51676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:55:55.569421   51676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:55:55.575398   51676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:55:55.580792   51676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:55:55.586503   51676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
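Two OpenSSL idioms appear in the certificate steps above: 'openssl x509 -hash -noout' prints the subject-name hash that must be used as the symlink name under /etc/ssl/certs (hence 3ec20f2e.0, b5213941.0 and 51391683.0), and '-checkend 86400' exits non-zero if the certificate expires within the next 24 hours. A small sketch combining both for an arbitrary PEM file (variable names are illustrative):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # the link name OpenSSL resolves at runtime
	openssl x509 -noout -in "$CERT" -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"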
	I0803 23:55:55.592091   51676 kubeadm.go:392] StartCluster: {Name:test-preload-278819 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-278819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:55:55.592172   51676 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:55:55.592212   51676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:55:55.631097   51676 cri.go:89] found id: ""
	I0803 23:55:55.631190   51676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:55:55.641989   51676 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 23:55:55.642014   51676 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 23:55:55.642065   51676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 23:55:55.652241   51676 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:55.652653   51676 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-278819" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:55:55.652754   51676 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-278819" cluster setting kubeconfig missing "test-preload-278819" context setting]
	I0803 23:55:55.653013   51676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:55:55.653667   51676 kapi.go:59] client config for test-preload-278819: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 23:55:55.654340   51676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 23:55:55.664042   51676 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.129
	I0803 23:55:55.664084   51676 kubeadm.go:1160] stopping kube-system containers ...
	I0803 23:55:55.664106   51676 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0803 23:55:55.664158   51676 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:55:55.700268   51676 cri.go:89] found id: ""
	I0803 23:55:55.700345   51676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0803 23:55:55.717891   51676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 23:55:55.728248   51676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 23:55:55.728265   51676 kubeadm.go:157] found existing configuration files:
	
	I0803 23:55:55.728308   51676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 23:55:55.738354   51676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 23:55:55.738417   51676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 23:55:55.748575   51676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 23:55:55.758276   51676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 23:55:55.758332   51676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 23:55:55.768390   51676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 23:55:55.777981   51676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 23:55:55.778044   51676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 23:55:55.787953   51676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 23:55:55.797400   51676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 23:55:55.797452   51676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 23:55:55.807544   51676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
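The generated kubeadm config is staged as kubeadm.yaml.new, compared against the existing kubeadm.yaml with 'diff -u' (which here concluded that no reconfiguration was required), and only then promoted into place. The pattern, roughly:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no change" || echo "config changed"   # diff exits non-zero when the files differ
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml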
	I0803 23:55:55.817634   51676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 23:55:55.912788   51676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 23:55:56.713247   51676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0803 23:55:56.978908   51676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 23:55:57.057711   51676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
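Rather than a full 'kubeadm init', the restart path re-runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. Once the control-plane and etcd phases complete, the static pod manifests should exist on the node; a quick check (illustrative):

	ls /etc/kubernetes/manifests
	# expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml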
	I0803 23:55:57.194190   51676 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:55:57.194298   51676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:57.695318   51676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:58.194493   51676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:58.215437   51676 api_server.go:72] duration metric: took 1.021246209s to wait for apiserver process to appear ...
	I0803 23:55:58.215463   51676 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:55:58.215486   51676 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0803 23:55:58.215970   51676 api_server.go:269] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0803 23:55:58.715731   51676 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0803 23:56:02.381040   51676 api_server.go:279] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0803 23:56:02.381068   51676 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0803 23:56:02.381081   51676 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0803 23:56:02.461466   51676 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0803 23:56:02.461497   51676 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0803 23:56:02.715813   51676 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0803 23:56:02.725289   51676 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0803 23:56:02.725325   51676 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0803 23:56:03.215871   51676 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0803 23:56:03.222501   51676 api_server.go:279] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0803 23:56:03.222537   51676 api_server.go:103] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0803 23:56:03.716060   51676 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0803 23:56:03.721635   51676 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0803 23:56:03.727423   51676 api_server.go:141] control plane version: v1.24.4
	I0803 23:56:03.727448   51676 api_server.go:131] duration metric: took 5.511978296s to wait for apiserver health ...
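The healthz wait above polls https://192.168.39.129:8443/healthz roughly every 500ms, treating "connection refused", 403 (anonymous access before the RBAC bootstrap hook finishes) and 500 (post-start hooks still failing) as not-ready until a plain "ok" comes back. A shell equivalent of that loop, as a sketch (curl -k skips TLS verification, which the real client does not do):

	until curl -ksf https://192.168.39.129:8443/healthz | grep -qx ok; do
	  sleep 0.5   # keep retrying through refused connections, 403s and 500s
	done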
	I0803 23:56:03.727457   51676 cni.go:84] Creating CNI manager for ""
	I0803 23:56:03.727463   51676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:56:03.729349   51676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 23:56:03.730518   51676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 23:56:03.742208   51676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
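The 496-byte file written to /etc/cni/net.d/1-k8s.conflist wires the bridge CNI to the 10.244.0.0/16 pod CIDR selected earlier. The log does not show the file's contents; a generic bridge conflist of the same shape (an assumption, not necessarily what minikube writes) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}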
	I0803 23:56:03.761465   51676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:56:03.770431   51676 system_pods.go:59] 8 kube-system pods found
	I0803 23:56:03.770463   51676 system_pods.go:61] "coredns-6d4b75cb6d-9lzjx" [16050b3a-cc02-4346-b79b-ae1c23ccac85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0803 23:56:03.770472   51676 system_pods.go:61] "coredns-6d4b75cb6d-h5l7p" [a01084f2-a4e2-4eab-a8a3-0eba7c37e220] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0803 23:56:03.770478   51676 system_pods.go:61] "etcd-test-preload-278819" [260dd4a7-a88f-4ab8-9ded-e5132d0baf47] Running
	I0803 23:56:03.770483   51676 system_pods.go:61] "kube-apiserver-test-preload-278819" [729babeb-bb57-48c3-8f2f-161e2c49d74f] Running
	I0803 23:56:03.770489   51676 system_pods.go:61] "kube-controller-manager-test-preload-278819" [d6518a89-9aa4-4e60-a7d1-d11f3ef7f064] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0803 23:56:03.770494   51676 system_pods.go:61] "kube-proxy-lbg62" [a592e7c3-d7d5-4938-a49f-7034f6aba338] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0803 23:56:03.770499   51676 system_pods.go:61] "kube-scheduler-test-preload-278819" [b0d0ffc5-6cbc-45d2-9ccf-89d93e79a77b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0803 23:56:03.770505   51676 system_pods.go:61] "storage-provisioner" [44b30521-aa9d-4ead-a77d-e94a940cabfe] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0803 23:56:03.770512   51676 system_pods.go:74] duration metric: took 9.023006ms to wait for pod list to return data ...
	I0803 23:56:03.770522   51676 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:56:03.773725   51676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:56:03.773749   51676 node_conditions.go:123] node cpu capacity is 2
	I0803 23:56:03.773758   51676 node_conditions.go:105] duration metric: took 3.231746ms to run NodePressure ...
	I0803 23:56:03.773774   51676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0803 23:56:03.962188   51676 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0803 23:56:03.966093   51676 kubeadm.go:739] kubelet initialised
	I0803 23:56:03.966112   51676 kubeadm.go:740] duration metric: took 3.90292ms waiting for restarted kubelet to initialise ...
	I0803 23:56:03.966119   51676 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:56:03.970707   51676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9lzjx" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:03.975071   51676 pod_ready.go:97] node "test-preload-278819" hosting pod "coredns-6d4b75cb6d-9lzjx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:03.975101   51676 pod_ready.go:81] duration metric: took 4.371068ms for pod "coredns-6d4b75cb6d-9lzjx" in "kube-system" namespace to be "Ready" ...
	E0803 23:56:03.975110   51676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-278819" hosting pod "coredns-6d4b75cb6d-9lzjx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:03.975116   51676 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-h5l7p" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:03.978930   51676 pod_ready.go:97] node "test-preload-278819" hosting pod "coredns-6d4b75cb6d-h5l7p" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:03.978948   51676 pod_ready.go:81] duration metric: took 3.823129ms for pod "coredns-6d4b75cb6d-h5l7p" in "kube-system" namespace to be "Ready" ...
	E0803 23:56:03.978958   51676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-278819" hosting pod "coredns-6d4b75cb6d-h5l7p" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:03.978965   51676 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:03.987187   51676 pod_ready.go:97] node "test-preload-278819" hosting pod "etcd-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:03.987209   51676 pod_ready.go:81] duration metric: took 8.234334ms for pod "etcd-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	E0803 23:56:03.987218   51676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-278819" hosting pod "etcd-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:03.987225   51676 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:04.165527   51676 pod_ready.go:97] node "test-preload-278819" hosting pod "kube-apiserver-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:04.165561   51676 pod_ready.go:81] duration metric: took 178.327091ms for pod "kube-apiserver-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	E0803 23:56:04.165571   51676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-278819" hosting pod "kube-apiserver-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:04.165579   51676 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:04.567963   51676 pod_ready.go:97] node "test-preload-278819" hosting pod "kube-controller-manager-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:04.567997   51676 pod_ready.go:81] duration metric: took 402.406994ms for pod "kube-controller-manager-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	E0803 23:56:04.568010   51676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-278819" hosting pod "kube-controller-manager-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:04.568026   51676 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lbg62" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:04.965024   51676 pod_ready.go:97] node "test-preload-278819" hosting pod "kube-proxy-lbg62" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:04.965052   51676 pod_ready.go:81] duration metric: took 397.013334ms for pod "kube-proxy-lbg62" in "kube-system" namespace to be "Ready" ...
	E0803 23:56:04.965061   51676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-278819" hosting pod "kube-proxy-lbg62" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:04.965067   51676 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:05.365332   51676 pod_ready.go:97] node "test-preload-278819" hosting pod "kube-scheduler-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:05.365382   51676 pod_ready.go:81] duration metric: took 400.307243ms for pod "kube-scheduler-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	E0803 23:56:05.365395   51676 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-278819" hosting pod "kube-scheduler-test-preload-278819" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:05.365411   51676 pod_ready.go:38] duration metric: took 1.399283307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
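The extra wait above iterates over the system-critical pods but short-circuits per pod (pod_ready.go:97/66) because the node itself still reports Ready=False right after the kubelet restart, so pod-level readiness cannot be trusted yet. An equivalent manual check against this profile, as a sketch:

	kubectl --context test-preload-278819 get node test-preload-278819 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'    # "False" immediately after the restart
	kubectl --context test-preload-278819 -n kube-system wait pod --all --for=condition=Ready --timeout=4m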
	I0803 23:56:05.365436   51676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 23:56:05.377824   51676 ops.go:34] apiserver oom_adj: -16
	I0803 23:56:05.377847   51676 kubeadm.go:597] duration metric: took 9.735826697s to restartPrimaryControlPlane
	I0803 23:56:05.377859   51676 kubeadm.go:394] duration metric: took 9.785776247s to StartCluster
	I0803 23:56:05.377879   51676 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:56:05.377957   51676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:56:05.378544   51676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:56:05.378795   51676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:56:05.378866   51676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 23:56:05.378956   51676 addons.go:69] Setting storage-provisioner=true in profile "test-preload-278819"
	I0803 23:56:05.379018   51676 config.go:182] Loaded profile config "test-preload-278819": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0803 23:56:05.379025   51676 addons.go:234] Setting addon storage-provisioner=true in "test-preload-278819"
	W0803 23:56:05.379079   51676 addons.go:243] addon storage-provisioner should already be in state true
	I0803 23:56:05.379112   51676 host.go:66] Checking if "test-preload-278819" exists ...
	I0803 23:56:05.378986   51676 addons.go:69] Setting default-storageclass=true in profile "test-preload-278819"
	I0803 23:56:05.379182   51676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-278819"
	I0803 23:56:05.379417   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:05.379455   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:05.379536   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:05.379578   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:05.381177   51676 out.go:177] * Verifying Kubernetes components...
	I0803 23:56:05.382236   51676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:56:05.394893   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35937
	I0803 23:56:05.394933   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0803 23:56:05.395346   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:05.395554   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:05.395873   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:56:05.395891   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:05.395997   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:56:05.396020   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:05.396254   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:05.396308   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:05.396451   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetState
	I0803 23:56:05.396805   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:05.396841   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:05.398574   51676 kapi.go:59] client config for test-preload-278819: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/profiles/test-preload-278819/client.key", CAFile:"/home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 23:56:05.398819   51676 addons.go:234] Setting addon default-storageclass=true in "test-preload-278819"
	W0803 23:56:05.398840   51676 addons.go:243] addon default-storageclass should already be in state true
	I0803 23:56:05.398884   51676 host.go:66] Checking if "test-preload-278819" exists ...
	I0803 23:56:05.399217   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:05.399266   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:05.411003   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0803 23:56:05.411464   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:05.411940   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:56:05.411960   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:05.412315   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:05.412497   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetState
	I0803 23:56:05.413595   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0803 23:56:05.413980   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:05.414507   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:56:05.414532   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:05.414546   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:56:05.414832   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:05.415373   51676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:05.415420   51676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:05.416568   51676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:56:05.418029   51676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:56:05.418048   51676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:56:05.418064   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:56:05.421260   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:56:05.421715   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:56:05.421734   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:56:05.421897   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:56:05.422057   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:56:05.422258   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:56:05.422419   51676 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa Username:docker}
	I0803 23:56:05.432216   51676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I0803 23:56:05.432629   51676 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:05.433148   51676 main.go:141] libmachine: Using API Version  1
	I0803 23:56:05.433166   51676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:05.433519   51676 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:05.433733   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetState
	I0803 23:56:05.435217   51676 main.go:141] libmachine: (test-preload-278819) Calling .DriverName
	I0803 23:56:05.435417   51676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:56:05.435433   51676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:56:05.435452   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHHostname
	I0803 23:56:05.437802   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:56:05.438210   51676 main.go:141] libmachine: (test-preload-278819) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:14:33", ip: ""} in network mk-test-preload-278819: {Iface:virbr1 ExpiryTime:2024-08-04 00:55:31 +0000 UTC Type:0 Mac:52:54:00:66:14:33 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-278819 Clientid:01:52:54:00:66:14:33}
	I0803 23:56:05.438231   51676 main.go:141] libmachine: (test-preload-278819) DBG | domain test-preload-278819 has defined IP address 192.168.39.129 and MAC address 52:54:00:66:14:33 in network mk-test-preload-278819
	I0803 23:56:05.438413   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHPort
	I0803 23:56:05.438611   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHKeyPath
	I0803 23:56:05.438770   51676 main.go:141] libmachine: (test-preload-278819) Calling .GetSSHUsername
	I0803 23:56:05.438925   51676 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/test-preload-278819/id_rsa Username:docker}
	I0803 23:56:05.548264   51676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:56:05.568626   51676 node_ready.go:35] waiting up to 6m0s for node "test-preload-278819" to be "Ready" ...
	I0803 23:56:05.624141   51676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:56:05.644155   51676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:56:06.586332   51676 main.go:141] libmachine: Making call to close driver server
	I0803 23:56:06.586350   51676 main.go:141] libmachine: Making call to close driver server
	I0803 23:56:06.586364   51676 main.go:141] libmachine: (test-preload-278819) Calling .Close
	I0803 23:56:06.586354   51676 main.go:141] libmachine: (test-preload-278819) Calling .Close
	I0803 23:56:06.586744   51676 main.go:141] libmachine: (test-preload-278819) DBG | Closing plugin on server side
	I0803 23:56:06.586751   51676 main.go:141] libmachine: (test-preload-278819) DBG | Closing plugin on server side
	I0803 23:56:06.586751   51676 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:56:06.586760   51676 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:56:06.586772   51676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:56:06.586775   51676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:56:06.586783   51676 main.go:141] libmachine: Making call to close driver server
	I0803 23:56:06.586784   51676 main.go:141] libmachine: Making call to close driver server
	I0803 23:56:06.586793   51676 main.go:141] libmachine: (test-preload-278819) Calling .Close
	I0803 23:56:06.586796   51676 main.go:141] libmachine: (test-preload-278819) Calling .Close
	I0803 23:56:06.586996   51676 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:56:06.587014   51676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:56:06.588152   51676 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:56:06.588167   51676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:56:06.592973   51676 main.go:141] libmachine: Making call to close driver server
	I0803 23:56:06.592988   51676 main.go:141] libmachine: (test-preload-278819) Calling .Close
	I0803 23:56:06.593218   51676 main.go:141] libmachine: (test-preload-278819) DBG | Closing plugin on server side
	I0803 23:56:06.593227   51676 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:56:06.593240   51676 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:56:06.595099   51676 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0803 23:56:06.596351   51676 addons.go:510] duration metric: took 1.217494256s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0803 23:56:07.572860   51676 node_ready.go:53] node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:10.072849   51676 node_ready.go:53] node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:12.573253   51676 node_ready.go:53] node "test-preload-278819" has status "Ready":"False"
	I0803 23:56:13.078253   51676 node_ready.go:49] node "test-preload-278819" has status "Ready":"True"
	I0803 23:56:13.078274   51676 node_ready.go:38] duration metric: took 7.509615488s for node "test-preload-278819" to be "Ready" ...
	I0803 23:56:13.078285   51676 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:56:13.085751   51676 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9lzjx" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:13.092414   51676 pod_ready.go:92] pod "coredns-6d4b75cb6d-9lzjx" in "kube-system" namespace has status "Ready":"True"
	I0803 23:56:13.092434   51676 pod_ready.go:81] duration metric: took 6.66041ms for pod "coredns-6d4b75cb6d-9lzjx" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:13.092443   51676 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:13.098669   51676 pod_ready.go:92] pod "etcd-test-preload-278819" in "kube-system" namespace has status "Ready":"True"
	I0803 23:56:13.098688   51676 pod_ready.go:81] duration metric: took 6.238779ms for pod "etcd-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:13.098696   51676 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.104830   51676 pod_ready.go:92] pod "kube-apiserver-test-preload-278819" in "kube-system" namespace has status "Ready":"True"
	I0803 23:56:15.104860   51676 pod_ready.go:81] duration metric: took 2.006157263s for pod "kube-apiserver-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.104873   51676 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.110280   51676 pod_ready.go:92] pod "kube-controller-manager-test-preload-278819" in "kube-system" namespace has status "Ready":"True"
	I0803 23:56:15.110300   51676 pod_ready.go:81] duration metric: took 5.420563ms for pod "kube-controller-manager-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.110315   51676 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lbg62" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.114837   51676 pod_ready.go:92] pod "kube-proxy-lbg62" in "kube-system" namespace has status "Ready":"True"
	I0803 23:56:15.114857   51676 pod_ready.go:81] duration metric: took 4.536478ms for pod "kube-proxy-lbg62" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.114866   51676 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.473158   51676 pod_ready.go:92] pod "kube-scheduler-test-preload-278819" in "kube-system" namespace has status "Ready":"True"
	I0803 23:56:15.473185   51676 pod_ready.go:81] duration metric: took 358.311972ms for pod "kube-scheduler-test-preload-278819" in "kube-system" namespace to be "Ready" ...
	I0803 23:56:15.473195   51676 pod_ready.go:38] duration metric: took 2.39490133s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:56:15.473209   51676 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:56:15.473258   51676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:56:15.488642   51676 api_server.go:72] duration metric: took 10.109808569s to wait for apiserver process to appear ...
	I0803 23:56:15.488672   51676 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:56:15.488689   51676 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0803 23:56:15.493687   51676 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0803 23:56:15.494601   51676 api_server.go:141] control plane version: v1.24.4
	I0803 23:56:15.494620   51676 api_server.go:131] duration metric: took 5.941341ms to wait for apiserver health ...
	I0803 23:56:15.494627   51676 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:56:15.675584   51676 system_pods.go:59] 7 kube-system pods found
	I0803 23:56:15.675616   51676 system_pods.go:61] "coredns-6d4b75cb6d-9lzjx" [16050b3a-cc02-4346-b79b-ae1c23ccac85] Running
	I0803 23:56:15.675623   51676 system_pods.go:61] "etcd-test-preload-278819" [260dd4a7-a88f-4ab8-9ded-e5132d0baf47] Running
	I0803 23:56:15.675629   51676 system_pods.go:61] "kube-apiserver-test-preload-278819" [729babeb-bb57-48c3-8f2f-161e2c49d74f] Running
	I0803 23:56:15.675634   51676 system_pods.go:61] "kube-controller-manager-test-preload-278819" [d6518a89-9aa4-4e60-a7d1-d11f3ef7f064] Running
	I0803 23:56:15.675639   51676 system_pods.go:61] "kube-proxy-lbg62" [a592e7c3-d7d5-4938-a49f-7034f6aba338] Running
	I0803 23:56:15.675644   51676 system_pods.go:61] "kube-scheduler-test-preload-278819" [b0d0ffc5-6cbc-45d2-9ccf-89d93e79a77b] Running
	I0803 23:56:15.675649   51676 system_pods.go:61] "storage-provisioner" [44b30521-aa9d-4ead-a77d-e94a940cabfe] Running
	I0803 23:56:15.675655   51676 system_pods.go:74] duration metric: took 181.022601ms to wait for pod list to return data ...
	I0803 23:56:15.675664   51676 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:56:15.872683   51676 default_sa.go:45] found service account: "default"
	I0803 23:56:15.872712   51676 default_sa.go:55] duration metric: took 197.040755ms for default service account to be created ...
	I0803 23:56:15.872722   51676 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:56:16.081039   51676 system_pods.go:86] 7 kube-system pods found
	I0803 23:56:16.081070   51676 system_pods.go:89] "coredns-6d4b75cb6d-9lzjx" [16050b3a-cc02-4346-b79b-ae1c23ccac85] Running
	I0803 23:56:16.081078   51676 system_pods.go:89] "etcd-test-preload-278819" [260dd4a7-a88f-4ab8-9ded-e5132d0baf47] Running
	I0803 23:56:16.081083   51676 system_pods.go:89] "kube-apiserver-test-preload-278819" [729babeb-bb57-48c3-8f2f-161e2c49d74f] Running
	I0803 23:56:16.081088   51676 system_pods.go:89] "kube-controller-manager-test-preload-278819" [d6518a89-9aa4-4e60-a7d1-d11f3ef7f064] Running
	I0803 23:56:16.081093   51676 system_pods.go:89] "kube-proxy-lbg62" [a592e7c3-d7d5-4938-a49f-7034f6aba338] Running
	I0803 23:56:16.081098   51676 system_pods.go:89] "kube-scheduler-test-preload-278819" [b0d0ffc5-6cbc-45d2-9ccf-89d93e79a77b] Running
	I0803 23:56:16.081103   51676 system_pods.go:89] "storage-provisioner" [44b30521-aa9d-4ead-a77d-e94a940cabfe] Running
	I0803 23:56:16.081112   51676 system_pods.go:126] duration metric: took 208.384215ms to wait for k8s-apps to be running ...
	I0803 23:56:16.081121   51676 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:56:16.081174   51676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:56:16.098196   51676 system_svc.go:56] duration metric: took 17.064638ms WaitForService to wait for kubelet
	I0803 23:56:16.098234   51676 kubeadm.go:582] duration metric: took 10.719407239s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:56:16.098284   51676 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:56:16.273478   51676 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:56:16.273518   51676 node_conditions.go:123] node cpu capacity is 2
	I0803 23:56:16.273528   51676 node_conditions.go:105] duration metric: took 175.237027ms to run NodePressure ...
	I0803 23:56:16.273542   51676 start.go:241] waiting for startup goroutines ...
	I0803 23:56:16.273552   51676 start.go:246] waiting for cluster config update ...
	I0803 23:56:16.273565   51676 start.go:255] writing updated cluster config ...
	I0803 23:56:16.273909   51676 ssh_runner.go:195] Run: rm -f paused
	I0803 23:56:16.319222   51676 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0803 23:56:16.321412   51676 out.go:177] 
	W0803 23:56:16.322803   51676 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0803 23:56:16.324005   51676 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0803 23:56:16.325199   51676 out.go:177] * Done! kubectl is now configured to use "test-preload-278819" cluster and "default" namespace by default
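	Note: the skew warning above is expected in this run; the host kubectl is v1.30.3 while the restored cluster runs v1.24.4 (six minor versions apart). A minimal way to act on the suggestion printed in the log, assuming the minikube binary is on PATH and the test-preload-278819 profile still exists on the host, would be:

	    # use the version-matched kubectl that minikube provides, as the log suggests
	    minikube -p test-preload-278819 kubectl -- get pods -A
	    # compare the host kubectl client version against the cluster's server version
	    kubectl --context test-preload-278819 version --output=yaml

	This is only an illustrative sketch of the workaround named in the log, not part of the recorded test output.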
	
	
	==> CRI-O <==
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.255575579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c282cb9-5b37-4047-904c-9f85f5ea7e67 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.256455782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93d84652-c4ea-468e-8c91-31f67a528dfe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.256928777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729377256908114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93d84652-c4ea-468e-8c91-31f67a528dfe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.257370449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=702edea4-15e0-4b94-a4ed-9a2d9fabccec name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.257451756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=702edea4-15e0-4b94-a4ed-9a2d9fabccec name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.257670588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6fafedb858a5113736260bf4adef7161cfdc7986ce2b6e6bfacf902a3b069555,PodSandboxId:4b82924016b1d9c6c3bebe26b144c49f39b02b4b04698a7ad9411c7bfdb89efc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722729371657220426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9lzjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16050b3a-cc02-4346-b79b-ae1c23ccac85,},Annotations:map[string]string{io.kubernetes.container.hash: da2b2fde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741e850a09a80be9efe9e00be3a03d42cfefd83f9a0a04cd13ed7182c9100b7e,PodSandboxId:dc397cb98dbbff3ee608455129d2a54001ad5643b2fde2ddda1efe50bc5abedb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729364774976403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 44b30521-aa9d-4ead-a77d-e94a940cabfe,},Annotations:map[string]string{io.kubernetes.container.hash: df03eb22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7000482a4ba031b078893280494ef53afff7f3784343187c03b6be656330ab,PodSandboxId:3f992584ee50dc3be6fa20c132bcd83de19225bc1a5b43ae4fb60073442ae012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722729364431598564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbg62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5
92e7c3-d7d5-4938-a49f-7034f6aba338,},Annotations:map[string]string{io.kubernetes.container.hash: ff0ceee2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5421eef35291eb313a6ca0bd0e4506e0e1cc6f798d642842a69e805ffe453e,PodSandboxId:bb6930fa6cfaa920a65026c11eeb907079745b05ae2b29a40080741962248192,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722729357911899973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c6b365f
7380ee5eaf2920dde06320e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b02d20db7ea06234c5b15582d2aa7f7e74ec32429df440813d5df0e3418dcb2,PodSandboxId:add1093d6471baeb113f1a145b19644d0c49c86b75028f9716b7d4017034b5f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722729357851411772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e84890c76dbfc969ce3e80f5c811c53,},Annotations:map
[string]string{io.kubernetes.container.hash: 3f34ecd4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749254fca72aedecc831dcb4e49d28fbd0c406fd59abef2b130dd92a2fc3a495,PodSandboxId:d86cdd0cb01e43c3373e2c9352cdc1f6a57c3e280ef04898c51ddab8ef441ffc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722729357870846235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f6aea67e62222e889a89c9e330a22e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d55ce7b2b84e272f87fa4c17a2cbd0918c21a2f43e716d4b6ccbcd572f6ce4,PodSandboxId:4966bed91aa3bb804f19190af042b732fb95ac088e57902a2b90debd95818fef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722729357825259287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602637972582b774662411f699b834a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: bd90b533,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=702edea4-15e0-4b94-a4ed-9a2d9fabccec name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.297376460Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=956974b5-1aba-4ca8-a7b2-1a7aff8640a4 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.297474512Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=956974b5-1aba-4ca8-a7b2-1a7aff8640a4 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.298958458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efc2c303-9f4a-4baa-932c-df8e35a2eb22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.299406456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729377299382428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efc2c303-9f4a-4baa-932c-df8e35a2eb22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.299920923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b753ddd-7bb7-4dc0-ac8c-3dc1f367a2a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.300015066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b753ddd-7bb7-4dc0-ac8c-3dc1f367a2a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.300177235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6fafedb858a5113736260bf4adef7161cfdc7986ce2b6e6bfacf902a3b069555,PodSandboxId:4b82924016b1d9c6c3bebe26b144c49f39b02b4b04698a7ad9411c7bfdb89efc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722729371657220426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9lzjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16050b3a-cc02-4346-b79b-ae1c23ccac85,},Annotations:map[string]string{io.kubernetes.container.hash: da2b2fde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741e850a09a80be9efe9e00be3a03d42cfefd83f9a0a04cd13ed7182c9100b7e,PodSandboxId:dc397cb98dbbff3ee608455129d2a54001ad5643b2fde2ddda1efe50bc5abedb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729364774976403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 44b30521-aa9d-4ead-a77d-e94a940cabfe,},Annotations:map[string]string{io.kubernetes.container.hash: df03eb22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7000482a4ba031b078893280494ef53afff7f3784343187c03b6be656330ab,PodSandboxId:3f992584ee50dc3be6fa20c132bcd83de19225bc1a5b43ae4fb60073442ae012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722729364431598564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbg62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5
92e7c3-d7d5-4938-a49f-7034f6aba338,},Annotations:map[string]string{io.kubernetes.container.hash: ff0ceee2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5421eef35291eb313a6ca0bd0e4506e0e1cc6f798d642842a69e805ffe453e,PodSandboxId:bb6930fa6cfaa920a65026c11eeb907079745b05ae2b29a40080741962248192,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722729357911899973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c6b365f
7380ee5eaf2920dde06320e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b02d20db7ea06234c5b15582d2aa7f7e74ec32429df440813d5df0e3418dcb2,PodSandboxId:add1093d6471baeb113f1a145b19644d0c49c86b75028f9716b7d4017034b5f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722729357851411772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e84890c76dbfc969ce3e80f5c811c53,},Annotations:map
[string]string{io.kubernetes.container.hash: 3f34ecd4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749254fca72aedecc831dcb4e49d28fbd0c406fd59abef2b130dd92a2fc3a495,PodSandboxId:d86cdd0cb01e43c3373e2c9352cdc1f6a57c3e280ef04898c51ddab8ef441ffc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722729357870846235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f6aea67e62222e889a89c9e330a22e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d55ce7b2b84e272f87fa4c17a2cbd0918c21a2f43e716d4b6ccbcd572f6ce4,PodSandboxId:4966bed91aa3bb804f19190af042b732fb95ac088e57902a2b90debd95818fef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722729357825259287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602637972582b774662411f699b834a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: bd90b533,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b753ddd-7bb7-4dc0-ac8c-3dc1f367a2a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.307896669Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3f434f70-88b3-4590-b295-dbcb5982b348 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.308109473Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4b82924016b1d9c6c3bebe26b144c49f39b02b4b04698a7ad9411c7bfdb89efc,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-9lzjx,Uid:16050b3a-cc02-4346-b79b-ae1c23ccac85,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722729371435168331,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-9lzjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16050b3a-cc02-4346-b79b-ae1c23ccac85,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:56:03.109716693Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc397cb98dbbff3ee608455129d2a54001ad5643b2fde2ddda1efe50bc5abedb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:44b30521-aa9d-4ead-a77d-e94a940cabfe,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722729364619405472,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44b30521-aa9d-4ead-a77d-e94a940cabfe,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-03T23:56:03.109715848Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f992584ee50dc3be6fa20c132bcd83de19225bc1a5b43ae4fb60073442ae012,Metadata:&PodSandboxMetadata{Name:kube-proxy-lbg62,Uid:a592e7c3-d7d5-4938-a49f-7034f6aba338,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722729364327035847,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lbg62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a592e7c3-d7d5-4938-a49f-7034f6aba338,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:56:03.109713926Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d86cdd0cb01e43c3373e2c9352cdc1f6a57c3e280ef04898c51ddab8ef441ffc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-278819,Ui
d:24f6aea67e62222e889a89c9e330a22e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722729357669870488,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f6aea67e62222e889a89c9e330a22e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 24f6aea67e62222e889a89c9e330a22e,kubernetes.io/config.seen: 2024-08-03T23:55:57.108640329Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bb6930fa6cfaa920a65026c11eeb907079745b05ae2b29a40080741962248192,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-278819,Uid:53c6b365f7380ee5eaf2920dde06320e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722729357666819178,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-278819,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c6b365f7380ee5eaf2920dde06320e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 53c6b365f7380ee5eaf2920dde06320e,kubernetes.io/config.seen: 2024-08-03T23:55:57.108641696Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4966bed91aa3bb804f19190af042b732fb95ac088e57902a2b90debd95818fef,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-278819,Uid:602637972582b774662411f699b834a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722729357650259959,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602637972582b774662411f699b834a2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.129:8443,kubernetes.io/config.hash: 602637972582b774662411f699b834a2,kub
ernetes.io/config.seen: 2024-08-03T23:55:57.108609904Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:add1093d6471baeb113f1a145b19644d0c49c86b75028f9716b7d4017034b5f5,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-278819,Uid:9e84890c76dbfc969ce3e80f5c811c53,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722729357644427000,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e84890c76dbfc969ce3e80f5c811c53,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.129:2379,kubernetes.io/config.hash: 9e84890c76dbfc969ce3e80f5c811c53,kubernetes.io/config.seen: 2024-08-03T23:55:57.184704667Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3f434f70-88b3-4590-b295-dbcb5982b348 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.308753121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92358963-138c-44b6-85ee-e7737b6e6211 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.308807065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92358963-138c-44b6-85ee-e7737b6e6211 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.308991316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6fafedb858a5113736260bf4adef7161cfdc7986ce2b6e6bfacf902a3b069555,PodSandboxId:4b82924016b1d9c6c3bebe26b144c49f39b02b4b04698a7ad9411c7bfdb89efc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722729371657220426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9lzjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16050b3a-cc02-4346-b79b-ae1c23ccac85,},Annotations:map[string]string{io.kubernetes.container.hash: da2b2fde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741e850a09a80be9efe9e00be3a03d42cfefd83f9a0a04cd13ed7182c9100b7e,PodSandboxId:dc397cb98dbbff3ee608455129d2a54001ad5643b2fde2ddda1efe50bc5abedb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729364774976403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 44b30521-aa9d-4ead-a77d-e94a940cabfe,},Annotations:map[string]string{io.kubernetes.container.hash: df03eb22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7000482a4ba031b078893280494ef53afff7f3784343187c03b6be656330ab,PodSandboxId:3f992584ee50dc3be6fa20c132bcd83de19225bc1a5b43ae4fb60073442ae012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722729364431598564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbg62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5
92e7c3-d7d5-4938-a49f-7034f6aba338,},Annotations:map[string]string{io.kubernetes.container.hash: ff0ceee2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5421eef35291eb313a6ca0bd0e4506e0e1cc6f798d642842a69e805ffe453e,PodSandboxId:bb6930fa6cfaa920a65026c11eeb907079745b05ae2b29a40080741962248192,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722729357911899973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c6b365f
7380ee5eaf2920dde06320e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b02d20db7ea06234c5b15582d2aa7f7e74ec32429df440813d5df0e3418dcb2,PodSandboxId:add1093d6471baeb113f1a145b19644d0c49c86b75028f9716b7d4017034b5f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722729357851411772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e84890c76dbfc969ce3e80f5c811c53,},Annotations:map
[string]string{io.kubernetes.container.hash: 3f34ecd4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749254fca72aedecc831dcb4e49d28fbd0c406fd59abef2b130dd92a2fc3a495,PodSandboxId:d86cdd0cb01e43c3373e2c9352cdc1f6a57c3e280ef04898c51ddab8ef441ffc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722729357870846235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f6aea67e62222e889a89c9e330a22e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d55ce7b2b84e272f87fa4c17a2cbd0918c21a2f43e716d4b6ccbcd572f6ce4,PodSandboxId:4966bed91aa3bb804f19190af042b732fb95ac088e57902a2b90debd95818fef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722729357825259287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602637972582b774662411f699b834a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: bd90b533,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92358963-138c-44b6-85ee-e7737b6e6211 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.337469400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38b81389-0c62-4341-8b7c-e94dbdf8d0f6 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.337629683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38b81389-0c62-4341-8b7c-e94dbdf8d0f6 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.338760613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b96a9f81-23e8-4890-9d91-c8fa21cdd4c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.339216807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729377339192774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b96a9f81-23e8-4890-9d91-c8fa21cdd4c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.339935458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=495e0dba-759e-40d1-b010-95bea1b922a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.339987867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=495e0dba-759e-40d1-b010-95bea1b922a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:17 test-preload-278819 crio[689]: time="2024-08-03 23:56:17.340150208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6fafedb858a5113736260bf4adef7161cfdc7986ce2b6e6bfacf902a3b069555,PodSandboxId:4b82924016b1d9c6c3bebe26b144c49f39b02b4b04698a7ad9411c7bfdb89efc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722729371657220426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9lzjx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16050b3a-cc02-4346-b79b-ae1c23ccac85,},Annotations:map[string]string{io.kubernetes.container.hash: da2b2fde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741e850a09a80be9efe9e00be3a03d42cfefd83f9a0a04cd13ed7182c9100b7e,PodSandboxId:dc397cb98dbbff3ee608455129d2a54001ad5643b2fde2ddda1efe50bc5abedb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729364774976403,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 44b30521-aa9d-4ead-a77d-e94a940cabfe,},Annotations:map[string]string{io.kubernetes.container.hash: df03eb22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e7000482a4ba031b078893280494ef53afff7f3784343187c03b6be656330ab,PodSandboxId:3f992584ee50dc3be6fa20c132bcd83de19225bc1a5b43ae4fb60073442ae012,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722729364431598564,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lbg62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5
92e7c3-d7d5-4938-a49f-7034f6aba338,},Annotations:map[string]string{io.kubernetes.container.hash: ff0ceee2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5421eef35291eb313a6ca0bd0e4506e0e1cc6f798d642842a69e805ffe453e,PodSandboxId:bb6930fa6cfaa920a65026c11eeb907079745b05ae2b29a40080741962248192,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722729357911899973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53c6b365f
7380ee5eaf2920dde06320e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b02d20db7ea06234c5b15582d2aa7f7e74ec32429df440813d5df0e3418dcb2,PodSandboxId:add1093d6471baeb113f1a145b19644d0c49c86b75028f9716b7d4017034b5f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722729357851411772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e84890c76dbfc969ce3e80f5c811c53,},Annotations:map
[string]string{io.kubernetes.container.hash: 3f34ecd4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:749254fca72aedecc831dcb4e49d28fbd0c406fd59abef2b130dd92a2fc3a495,PodSandboxId:d86cdd0cb01e43c3373e2c9352cdc1f6a57c3e280ef04898c51ddab8ef441ffc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722729357870846235,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24f6aea67e62222e889a89c9e330a22e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d55ce7b2b84e272f87fa4c17a2cbd0918c21a2f43e716d4b6ccbcd572f6ce4,PodSandboxId:4966bed91aa3bb804f19190af042b732fb95ac088e57902a2b90debd95818fef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722729357825259287,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-278819,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602637972582b774662411f699b834a2,},Annotation
s:map[string]string{io.kubernetes.container.hash: bd90b533,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=495e0dba-759e-40d1-b010-95bea1b922a9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6fafedb858a51       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   4b82924016b1d       coredns-6d4b75cb6d-9lzjx
	741e850a09a80       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   dc397cb98dbbf       storage-provisioner
	8e7000482a4ba       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   3f992584ee50d       kube-proxy-lbg62
	fd5421eef3529       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   bb6930fa6cfaa       kube-scheduler-test-preload-278819
	749254fca72ae       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   d86cdd0cb01e4       kube-controller-manager-test-preload-278819
	2b02d20db7ea0       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   add1093d6471b       etcd-test-preload-278819
	c9d55ce7b2b84       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   4966bed91aa3b       kube-apiserver-test-preload-278819
	
	
	==> coredns [6fafedb858a5113736260bf4adef7161cfdc7986ce2b6e6bfacf902a3b069555] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:55766 - 52904 "HINFO IN 603246420657649120.178517220531440056. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.013481146s
	
	
	==> describe nodes <==
	Name:               test-preload-278819
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-278819
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=test-preload-278819
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_54_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:54:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-278819
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:56:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:56:12 +0000   Sat, 03 Aug 2024 23:54:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:56:12 +0000   Sat, 03 Aug 2024 23:54:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:56:12 +0000   Sat, 03 Aug 2024 23:54:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:56:12 +0000   Sat, 03 Aug 2024 23:56:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    test-preload-278819
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a9b73a1d13b4e7796e5147357a3173f
	  System UUID:                3a9b73a1-d13b-4e77-96e5-147357a3173f
	  Boot ID:                    ffecef7c-fbd7-4374-a3ea-db75bfc3950b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9lzjx                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-test-preload-278819                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         97s
	  kube-system                 kube-apiserver-test-preload-278819             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-278819    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-lbg62                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-test-preload-278819             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 82s                  kube-proxy       
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x4 over 105s)  kubelet          Node test-preload-278819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     105s (x4 over 105s)  kubelet          Node test-preload-278819 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    105s (x4 over 105s)  kubelet          Node test-preload-278819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node test-preload-278819 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  97s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node test-preload-278819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node test-preload-278819 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                87s                  kubelet          Node test-preload-278819 status is now: NodeReady
	  Normal  RegisteredNode           85s                  node-controller  Node test-preload-278819 event: Registered Node test-preload-278819 in Controller
	  Normal  Starting                 20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node test-preload-278819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node test-preload-278819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node test-preload-278819 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                   node-controller  Node test-preload-278819 event: Registered Node test-preload-278819 in Controller
	
	
	==> dmesg <==
	[Aug 3 23:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050604] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040223] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.788198] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.622676] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.465140] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.860928] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.060596] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049510] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.157538] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.125769] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.267515] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[ +12.821985] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.060676] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.843833] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[Aug 3 23:56] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.673629] systemd-fstab-generator[1697]: Ignoring "noauto" option for root device
	[  +6.021963] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [2b02d20db7ea06234c5b15582d2aa7f7e74ec32429df440813d5df0e3418dcb2] <==
	{"level":"info","ts":"2024-08-03T23:55:58.279Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"245a8df1c58de0e1","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-03T23:55:58.281Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-03T23:55:58.282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 switched to configuration voters=(2619562202810409185)"}
	{"level":"info","ts":"2024-08-03T23:55:58.285Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","added-peer-id":"245a8df1c58de0e1","added-peer-peer-urls":["https://192.168.39.129:2380"]}
	{"level":"info","ts":"2024-08-03T23:55:58.285Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:55:58.285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T23:55:58.287Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-03T23:55:58.287Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-03T23:55:58.287Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-03T23:55:58.287Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-08-03T23:55:58.288Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-08-03T23:55:59.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-03T23:55:59.914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-03T23:55:59.915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgPreVoteResp from 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2024-08-03T23:55:59.915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became candidate at term 3"}
	{"level":"info","ts":"2024-08-03T23:55:59.915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgVoteResp from 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2024-08-03T23:55:59.915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became leader at term 3"}
	{"level":"info","ts":"2024-08-03T23:55:59.915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 245a8df1c58de0e1 elected leader 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2024-08-03T23:55:59.915Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"245a8df1c58de0e1","local-member-attributes":"{Name:test-preload-278819 ClientURLs:[https://192.168.39.129:2379]}","request-path":"/0/members/245a8df1c58de0e1/attributes","cluster-id":"a2af9788ad7a361f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-03T23:55:59.915Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:55:59.916Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T23:55:59.917Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-03T23:55:59.917Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.129:2379"}
	{"level":"info","ts":"2024-08-03T23:55:59.917Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T23:55:59.917Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:56:17 up 0 min,  0 users,  load average: 0.51, 0.15, 0.05
	Linux test-preload-278819 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9d55ce7b2b84e272f87fa4c17a2cbd0918c21a2f43e716d4b6ccbcd572f6ce4] <==
	I0803 23:56:02.347970       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0803 23:56:02.348041       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0803 23:56:02.348117       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0803 23:56:02.361580       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0803 23:56:02.363271       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0803 23:56:02.363298       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0803 23:56:02.433253       1 cache.go:39] Caches are synced for autoregister controller
	E0803 23:56:02.446089       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0803 23:56:02.463602       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0803 23:56:02.510953       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 23:56:02.522151       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0803 23:56:02.524411       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0803 23:56:02.524647       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0803 23:56:02.525038       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0803 23:56:02.529547       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0803 23:56:03.008573       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0803 23:56:03.327474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0803 23:56:03.860840       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0803 23:56:03.877060       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0803 23:56:03.917747       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0803 23:56:03.943175       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0803 23:56:03.950129       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0803 23:56:04.755429       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0803 23:56:15.613465       1 controller.go:611] quota admission added evaluator for: endpoints
	I0803 23:56:15.770361       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [749254fca72aedecc831dcb4e49d28fbd0c406fd59abef2b130dd92a2fc3a495] <==
	I0803 23:56:15.519623       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0803 23:56:15.519627       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0803 23:56:15.519982       1 event.go:294] "Event occurred" object="test-preload-278819" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-278819 event: Registered Node test-preload-278819 in Controller"
	I0803 23:56:15.522345       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0803 23:56:15.533437       1 shared_informer.go:262] Caches are synced for node
	I0803 23:56:15.533637       1 range_allocator.go:173] Starting range CIDR allocator
	I0803 23:56:15.533698       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0803 23:56:15.533743       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0803 23:56:15.561979       1 shared_informer.go:262] Caches are synced for PVC protection
	I0803 23:56:15.566550       1 shared_informer.go:262] Caches are synced for HPA
	I0803 23:56:15.567821       1 shared_informer.go:262] Caches are synced for GC
	I0803 23:56:15.583271       1 shared_informer.go:262] Caches are synced for resource quota
	I0803 23:56:15.601449       1 shared_informer.go:262] Caches are synced for endpoint
	I0803 23:56:15.605175       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0803 23:56:15.608612       1 shared_informer.go:262] Caches are synced for deployment
	I0803 23:56:15.617597       1 shared_informer.go:262] Caches are synced for disruption
	I0803 23:56:15.618440       1 disruption.go:371] Sending events to api server.
	I0803 23:56:15.620792       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0803 23:56:15.628404       1 shared_informer.go:262] Caches are synced for resource quota
	I0803 23:56:15.648551       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0803 23:56:15.658353       1 shared_informer.go:262] Caches are synced for attach detach
	I0803 23:56:15.696178       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0803 23:56:16.163223       1 shared_informer.go:262] Caches are synced for garbage collector
	I0803 23:56:16.168804       1 shared_informer.go:262] Caches are synced for garbage collector
	I0803 23:56:16.168843       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [8e7000482a4ba031b078893280494ef53afff7f3784343187c03b6be656330ab] <==
	I0803 23:56:04.676838       1 node.go:163] Successfully retrieved node IP: 192.168.39.129
	I0803 23:56:04.677163       1 server_others.go:138] "Detected node IP" address="192.168.39.129"
	I0803 23:56:04.677267       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0803 23:56:04.744100       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0803 23:56:04.744121       1 server_others.go:206] "Using iptables Proxier"
	I0803 23:56:04.744814       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0803 23:56:04.745891       1 server.go:661] "Version info" version="v1.24.4"
	I0803 23:56:04.745904       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:56:04.747246       1 config.go:317] "Starting service config controller"
	I0803 23:56:04.747270       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0803 23:56:04.747289       1 config.go:226] "Starting endpoint slice config controller"
	I0803 23:56:04.747293       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0803 23:56:04.750399       1 config.go:444] "Starting node config controller"
	I0803 23:56:04.750470       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0803 23:56:04.848050       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0803 23:56:04.848107       1 shared_informer.go:262] Caches are synced for service config
	I0803 23:56:04.850956       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [fd5421eef35291eb313a6ca0bd0e4506e0e1cc6f798d642842a69e805ffe453e] <==
	I0803 23:55:58.909297       1 serving.go:348] Generated self-signed cert in-memory
	W0803 23:56:02.400576       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0803 23:56:02.401472       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:56:02.401591       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0803 23:56:02.401675       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0803 23:56:02.442711       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0803 23:56:02.442793       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:56:02.453676       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0803 23:56:02.453734       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 23:56:02.455695       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0803 23:56:02.455833       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0803 23:56:02.554063       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.245340    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsh2j\" (UniqueName: \"kubernetes.io/projected/a592e7c3-d7d5-4938-a49f-7034f6aba338-kube-api-access-bsh2j\") pod \"kube-proxy-lbg62\" (UID: \"a592e7c3-d7d5-4938-a49f-7034f6aba338\") " pod="kube-system/kube-proxy-lbg62"
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.245419    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fkbd\" (UniqueName: \"kubernetes.io/projected/44b30521-aa9d-4ead-a77d-e94a940cabfe-kube-api-access-6fkbd\") pod \"storage-provisioner\" (UID: \"44b30521-aa9d-4ead-a77d-e94a940cabfe\") " pod="kube-system/storage-provisioner"
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.245474    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/44b30521-aa9d-4ead-a77d-e94a940cabfe-tmp\") pod \"storage-provisioner\" (UID: \"44b30521-aa9d-4ead-a77d-e94a940cabfe\") " pod="kube-system/storage-provisioner"
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.245547    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a592e7c3-d7d5-4938-a49f-7034f6aba338-lib-modules\") pod \"kube-proxy-lbg62\" (UID: \"a592e7c3-d7d5-4938-a49f-7034f6aba338\") " pod="kube-system/kube-proxy-lbg62"
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.245586    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume\") pod \"coredns-6d4b75cb6d-9lzjx\" (UID: \"16050b3a-cc02-4346-b79b-ae1c23ccac85\") " pod="kube-system/coredns-6d4b75cb6d-9lzjx"
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.245710    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwt5l\" (UniqueName: \"kubernetes.io/projected/16050b3a-cc02-4346-b79b-ae1c23ccac85-kube-api-access-dwt5l\") pod \"coredns-6d4b75cb6d-9lzjx\" (UID: \"16050b3a-cc02-4346-b79b-ae1c23ccac85\") " pod="kube-system/coredns-6d4b75cb6d-9lzjx"
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.245797    1086 reconciler.go:159] "Reconciler: start to sync state"
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.608579    1086 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01084f2-a4e2-4eab-a8a3-0eba7c37e220-config-volume\") pod \"a01084f2-a4e2-4eab-a8a3-0eba7c37e220\" (UID: \"a01084f2-a4e2-4eab-a8a3-0eba7c37e220\") "
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.608641    1086 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lktzs\" (UniqueName: \"kubernetes.io/projected/a01084f2-a4e2-4eab-a8a3-0eba7c37e220-kube-api-access-lktzs\") pod \"a01084f2-a4e2-4eab-a8a3-0eba7c37e220\" (UID: \"a01084f2-a4e2-4eab-a8a3-0eba7c37e220\") "
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: E0803 23:56:03.609351    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: E0803 23:56:03.609444    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume podName:16050b3a-cc02-4346-b79b-ae1c23ccac85 nodeName:}" failed. No retries permitted until 2024-08-03 23:56:04.109405286 +0000 UTC m=+7.135058983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume") pod "coredns-6d4b75cb6d-9lzjx" (UID: "16050b3a-cc02-4346-b79b-ae1c23ccac85") : object "kube-system"/"coredns" not registered
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: W0803 23:56:03.610019    1086 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/a01084f2-a4e2-4eab-a8a3-0eba7c37e220/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: W0803 23:56:03.610064    1086 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/a01084f2-a4e2-4eab-a8a3-0eba7c37e220/volumes/kubernetes.io~projected/kube-api-access-lktzs: clearQuota called, but quotas disabled
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.610244    1086 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01084f2-a4e2-4eab-a8a3-0eba7c37e220-kube-api-access-lktzs" (OuterVolumeSpecName: "kube-api-access-lktzs") pod "a01084f2-a4e2-4eab-a8a3-0eba7c37e220" (UID: "a01084f2-a4e2-4eab-a8a3-0eba7c37e220"). InnerVolumeSpecName "kube-api-access-lktzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.610462    1086 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a01084f2-a4e2-4eab-a8a3-0eba7c37e220-config-volume" (OuterVolumeSpecName: "config-volume") pod "a01084f2-a4e2-4eab-a8a3-0eba7c37e220" (UID: "a01084f2-a4e2-4eab-a8a3-0eba7c37e220"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.709585    1086 reconciler.go:384] "Volume detached for volume \"kube-api-access-lktzs\" (UniqueName: \"kubernetes.io/projected/a01084f2-a4e2-4eab-a8a3-0eba7c37e220-kube-api-access-lktzs\") on node \"test-preload-278819\" DevicePath \"\""
	Aug 03 23:56:03 test-preload-278819 kubelet[1086]: I0803 23:56:03.709633    1086 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01084f2-a4e2-4eab-a8a3-0eba7c37e220-config-volume\") on node \"test-preload-278819\" DevicePath \"\""
	Aug 03 23:56:04 test-preload-278819 kubelet[1086]: E0803 23:56:04.115768    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 03 23:56:04 test-preload-278819 kubelet[1086]: E0803 23:56:04.115858    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume podName:16050b3a-cc02-4346-b79b-ae1c23ccac85 nodeName:}" failed. No retries permitted until 2024-08-03 23:56:05.115843515 +0000 UTC m=+8.141497196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume") pod "coredns-6d4b75cb6d-9lzjx" (UID: "16050b3a-cc02-4346-b79b-ae1c23ccac85") : object "kube-system"/"coredns" not registered
	Aug 03 23:56:05 test-preload-278819 kubelet[1086]: E0803 23:56:05.124385    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 03 23:56:05 test-preload-278819 kubelet[1086]: E0803 23:56:05.124442    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume podName:16050b3a-cc02-4346-b79b-ae1c23ccac85 nodeName:}" failed. No retries permitted until 2024-08-03 23:56:07.124428417 +0000 UTC m=+10.150082097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume") pod "coredns-6d4b75cb6d-9lzjx" (UID: "16050b3a-cc02-4346-b79b-ae1c23ccac85") : object "kube-system"/"coredns" not registered
	Aug 03 23:56:05 test-preload-278819 kubelet[1086]: E0803 23:56:05.225684    1086 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9lzjx" podUID=16050b3a-cc02-4346-b79b-ae1c23ccac85
	Aug 03 23:56:05 test-preload-278819 kubelet[1086]: I0803 23:56:05.230910    1086 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a01084f2-a4e2-4eab-a8a3-0eba7c37e220 path="/var/lib/kubelet/pods/a01084f2-a4e2-4eab-a8a3-0eba7c37e220/volumes"
	Aug 03 23:56:07 test-preload-278819 kubelet[1086]: E0803 23:56:07.146908    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 03 23:56:07 test-preload-278819 kubelet[1086]: E0803 23:56:07.147064    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume podName:16050b3a-cc02-4346-b79b-ae1c23ccac85 nodeName:}" failed. No retries permitted until 2024-08-03 23:56:11.147041398 +0000 UTC m=+14.172695094 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/16050b3a-cc02-4346-b79b-ae1c23ccac85-config-volume") pod "coredns-6d4b75cb6d-9lzjx" (UID: "16050b3a-cc02-4346-b79b-ae1c23ccac85") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [741e850a09a80be9efe9e00be3a03d42cfefd83f9a0a04cd13ed7182c9100b7e] <==
	I0803 23:56:04.856213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-278819 -n test-preload-278819
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-278819 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-278819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-278819
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-278819: (1.109760484s)
--- FAIL: TestPreload (244.10s)

                                                
                                    
x
+
TestKubernetesUpgrade (444.94s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0803 23:58:27.616486   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m17.867723246s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-302198] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Downloading driver docker-machine-driver-kvm2:
	* Starting "kubernetes-upgrade-302198" primary control-plane node in "kubernetes-upgrade-302198" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:58:22.578358   53438 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:58:22.578503   53438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:58:22.578514   53438 out.go:304] Setting ErrFile to fd 2...
	I0803 23:58:22.578520   53438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:58:22.578693   53438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:58:22.579261   53438 out.go:298] Setting JSON to false
	I0803 23:58:22.580116   53438 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6047,"bootTime":1722723456,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:58:22.580175   53438 start.go:139] virtualization: kvm guest
	I0803 23:58:22.582259   53438 out.go:177] * [kubernetes-upgrade-302198] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:58:22.583600   53438 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:58:22.583615   53438 notify.go:220] Checking for updates...
	I0803 23:58:22.586234   53438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:58:22.587585   53438 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:58:22.588864   53438 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:58:22.590265   53438 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:58:22.591631   53438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:58:22.593522   53438 config.go:182] Loaded profile config "offline-crio-855826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:58:22.593658   53438 config.go:182] Loaded profile config "pause-908631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:58:22.593776   53438 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:58:22.632422   53438 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 23:58:22.633643   53438 start.go:297] selected driver: kvm2
	I0803 23:58:22.633660   53438 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:58:22.633675   53438 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:58:22.634406   53438 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:58:22.634480   53438 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:58:22.651608   53438 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.26.0
	W0803 23:58:22.651639   53438 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.26.0, want 1.33.1
	I0803 23:58:22.653211   53438 out.go:177] * Downloading driver docker-machine-driver-kvm2:
	I0803 23:58:22.654343   53438 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.33.1/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0803 23:58:24.909552   53438 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:58:24.909812   53438 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 23:58:24.909887   53438 cni.go:84] Creating CNI manager for ""
	I0803 23:58:24.909904   53438 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:58:24.909915   53438 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 23:58:24.909998   53438 start.go:340] cluster config:
	{Name:kubernetes-upgrade-302198 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:58:24.910109   53438 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:58:24.918448   53438 out.go:177] * Starting "kubernetes-upgrade-302198" primary control-plane node in "kubernetes-upgrade-302198" cluster
	I0803 23:58:24.919874   53438 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0803 23:58:24.919934   53438 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0803 23:58:24.919956   53438 cache.go:56] Caching tarball of preloaded images
	I0803 23:58:24.920051   53438 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:58:24.920066   53438 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0803 23:58:24.920172   53438 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/config.json ...
	I0803 23:58:24.920198   53438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/config.json: {Name:mkc77d362025e5a31b6fdd55945ed6de42b7cf53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:58:24.920355   53438 start.go:360] acquireMachinesLock for kubernetes-upgrade-302198: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:59:10.134501   53438 start.go:364] duration metric: took 45.214122477s to acquireMachinesLock for "kubernetes-upgrade-302198"
	I0803 23:59:10.134581   53438 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-302198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:59:10.134711   53438 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 23:59:10.137055   53438 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:59:10.137257   53438 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0803 23:59:10.137314   53438 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:59:10.154463   53438 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36315
	I0803 23:59:10.154923   53438 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:59:10.155439   53438 main.go:141] libmachine: Using API Version  1
	I0803 23:59:10.155457   53438 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:59:10.155831   53438 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:59:10.156026   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetMachineName
	I0803 23:59:10.156202   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:10.156343   53438 start.go:159] libmachine.API.Create for "kubernetes-upgrade-302198" (driver="kvm2")
	I0803 23:59:10.156385   53438 client.go:168] LocalClient.Create starting
	I0803 23:59:10.156422   53438 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0803 23:59:10.156464   53438 main.go:141] libmachine: Decoding PEM data...
	I0803 23:59:10.156484   53438 main.go:141] libmachine: Parsing certificate...
	I0803 23:59:10.156558   53438 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0803 23:59:10.156585   53438 main.go:141] libmachine: Decoding PEM data...
	I0803 23:59:10.156600   53438 main.go:141] libmachine: Parsing certificate...
	I0803 23:59:10.156625   53438 main.go:141] libmachine: Running pre-create checks...
	I0803 23:59:10.156638   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .PreCreateCheck
	I0803 23:59:10.156978   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetConfigRaw
	I0803 23:59:10.157436   53438 main.go:141] libmachine: Creating machine...
	I0803 23:59:10.157454   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .Create
	I0803 23:59:10.157611   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Creating KVM machine...
	I0803 23:59:10.158992   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found existing default KVM network
	I0803 23:59:10.159985   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:10.159809   53826 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:bc:da:b6} reservation:<nil>}
	I0803 23:59:10.160705   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:10.160598   53826 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:09:4d:0d} reservation:<nil>}
	I0803 23:59:10.161786   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:10.161680   53826 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ae610}
	I0803 23:59:10.161813   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | created network xml: 
	I0803 23:59:10.161855   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | <network>
	I0803 23:59:10.161877   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |   <name>mk-kubernetes-upgrade-302198</name>
	I0803 23:59:10.161895   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |   <dns enable='no'/>
	I0803 23:59:10.161905   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |   
	I0803 23:59:10.161919   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0803 23:59:10.161930   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |     <dhcp>
	I0803 23:59:10.161944   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0803 23:59:10.161955   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |     </dhcp>
	I0803 23:59:10.161965   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |   </ip>
	I0803 23:59:10.161974   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG |   
	I0803 23:59:10.161983   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | </network>
	I0803 23:59:10.161993   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | 
	I0803 23:59:10.167743   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | trying to create private KVM network mk-kubernetes-upgrade-302198 192.168.61.0/24...
	I0803 23:59:10.243337   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | private KVM network mk-kubernetes-upgrade-302198 192.168.61.0/24 created
	I0803 23:59:10.243373   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198 ...
	I0803 23:59:10.243387   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:10.243304   53826 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:59:10.243399   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:59:10.243437   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:59:10.481478   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:10.481323   53826 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/id_rsa...
	I0803 23:59:10.620849   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:10.620677   53826 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/kubernetes-upgrade-302198.rawdisk...
	I0803 23:59:10.620887   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Writing magic tar header
	I0803 23:59:10.620906   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Writing SSH key tar header
	I0803 23:59:10.620921   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:10.620817   53826 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198 ...
	I0803 23:59:10.620939   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198
	I0803 23:59:10.620955   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0803 23:59:10.620973   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198 (perms=drwx------)
	I0803 23:59:10.620998   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:59:10.621014   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:59:10.621028   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0803 23:59:10.621041   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:59:10.621052   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:59:10.621065   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Checking permissions on dir: /home
	I0803 23:59:10.621086   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0803 23:59:10.621123   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0803 23:59:10.621138   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Skipping /home - not owner
	I0803 23:59:10.621158   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:59:10.621171   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:59:10.621270   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Creating domain...
	I0803 23:59:10.622215   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) define libvirt domain using xml: 
	I0803 23:59:10.622238   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) <domain type='kvm'>
	I0803 23:59:10.622249   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   <name>kubernetes-upgrade-302198</name>
	I0803 23:59:10.622258   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   <memory unit='MiB'>2200</memory>
	I0803 23:59:10.622271   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   <vcpu>2</vcpu>
	I0803 23:59:10.622281   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   <features>
	I0803 23:59:10.622289   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <acpi/>
	I0803 23:59:10.622308   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <apic/>
	I0803 23:59:10.622354   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <pae/>
	I0803 23:59:10.622378   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     
	I0803 23:59:10.622389   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   </features>
	I0803 23:59:10.622402   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   <cpu mode='host-passthrough'>
	I0803 23:59:10.622410   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   
	I0803 23:59:10.622422   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   </cpu>
	I0803 23:59:10.622436   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   <os>
	I0803 23:59:10.622447   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <type>hvm</type>
	I0803 23:59:10.622466   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <boot dev='cdrom'/>
	I0803 23:59:10.622477   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <boot dev='hd'/>
	I0803 23:59:10.622491   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <bootmenu enable='no'/>
	I0803 23:59:10.622501   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   </os>
	I0803 23:59:10.622531   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   <devices>
	I0803 23:59:10.622555   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <disk type='file' device='cdrom'>
	I0803 23:59:10.622588   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/boot2docker.iso'/>
	I0803 23:59:10.622602   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <target dev='hdc' bus='scsi'/>
	I0803 23:59:10.622612   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <readonly/>
	I0803 23:59:10.622621   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     </disk>
	I0803 23:59:10.622632   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <disk type='file' device='disk'>
	I0803 23:59:10.622644   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:59:10.622660   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/kubernetes-upgrade-302198.rawdisk'/>
	I0803 23:59:10.622676   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <target dev='hda' bus='virtio'/>
	I0803 23:59:10.622688   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     </disk>
	I0803 23:59:10.622700   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <interface type='network'>
	I0803 23:59:10.622714   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <source network='mk-kubernetes-upgrade-302198'/>
	I0803 23:59:10.622735   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <model type='virtio'/>
	I0803 23:59:10.622745   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     </interface>
	I0803 23:59:10.622759   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <interface type='network'>
	I0803 23:59:10.622772   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <source network='default'/>
	I0803 23:59:10.622782   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <model type='virtio'/>
	I0803 23:59:10.622790   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     </interface>
	I0803 23:59:10.622800   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <serial type='pty'>
	I0803 23:59:10.622808   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <target port='0'/>
	I0803 23:59:10.622818   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     </serial>
	I0803 23:59:10.622833   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <console type='pty'>
	I0803 23:59:10.622847   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <target type='serial' port='0'/>
	I0803 23:59:10.622857   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     </console>
	I0803 23:59:10.622868   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     <rng model='virtio'>
	I0803 23:59:10.622879   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)       <backend model='random'>/dev/random</backend>
	I0803 23:59:10.622895   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     </rng>
	I0803 23:59:10.622905   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     
	I0803 23:59:10.622922   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)     
	I0803 23:59:10.622942   53438 main.go:141] libmachine: (kubernetes-upgrade-302198)   </devices>
	I0803 23:59:10.622952   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) </domain>
	I0803 23:59:10.622963   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) 
	I0803 23:59:10.630031   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:81:71:58 in network default
	I0803 23:59:10.630657   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Ensuring networks are active...
	I0803 23:59:10.630687   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:10.631534   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Ensuring network default is active
	I0803 23:59:10.631936   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Ensuring network mk-kubernetes-upgrade-302198 is active
	I0803 23:59:10.632536   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Getting domain xml...
	I0803 23:59:10.633439   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Creating domain...
	I0803 23:59:11.974001   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Waiting to get IP...
	I0803 23:59:11.974940   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:11.975384   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:11.975429   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:11.975366   53826 retry.go:31] will retry after 209.093725ms: waiting for machine to come up
	I0803 23:59:12.185960   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:12.186434   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:12.186462   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:12.186385   53826 retry.go:31] will retry after 267.325924ms: waiting for machine to come up
	I0803 23:59:12.456059   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:12.456567   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:12.456596   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:12.456537   53826 retry.go:31] will retry after 466.18653ms: waiting for machine to come up
	I0803 23:59:12.924016   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:12.924546   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:12.924575   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:12.924515   53826 retry.go:31] will retry after 464.591003ms: waiting for machine to come up
	I0803 23:59:13.391281   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:13.391899   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:13.391929   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:13.391852   53826 retry.go:31] will retry after 507.137318ms: waiting for machine to come up
	I0803 23:59:13.900524   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:13.900949   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:13.900971   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:13.900906   53826 retry.go:31] will retry after 766.287846ms: waiting for machine to come up
	I0803 23:59:14.668892   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:14.669285   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:14.669313   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:14.669202   53826 retry.go:31] will retry after 736.276385ms: waiting for machine to come up
	I0803 23:59:15.407585   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:15.408057   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:15.408088   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:15.407996   53826 retry.go:31] will retry after 1.191638871s: waiting for machine to come up
	I0803 23:59:16.601330   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:16.601887   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:16.601910   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:16.601839   53826 retry.go:31] will retry after 1.129557322s: waiting for machine to come up
	I0803 23:59:17.733197   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:17.733679   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:17.733702   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:17.733645   53826 retry.go:31] will retry after 2.298041469s: waiting for machine to come up
	I0803 23:59:20.035057   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:20.035529   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:20.035565   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:20.035493   53826 retry.go:31] will retry after 2.352967235s: waiting for machine to come up
	I0803 23:59:22.390868   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:22.391367   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:22.391405   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:22.391320   53826 retry.go:31] will retry after 2.447543528s: waiting for machine to come up
	I0803 23:59:24.840657   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:24.841090   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:24.841114   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:24.841044   53826 retry.go:31] will retry after 3.883096809s: waiting for machine to come up
	I0803 23:59:28.726416   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:28.726788   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find current IP address of domain kubernetes-upgrade-302198 in network mk-kubernetes-upgrade-302198
	I0803 23:59:28.726819   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | I0803 23:59:28.726739   53826 retry.go:31] will retry after 4.896633807s: waiting for machine to come up
	I0803 23:59:33.626909   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.627356   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Found IP for machine: 192.168.61.45
	I0803 23:59:33.627374   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Reserving static IP address...
	I0803 23:59:33.627390   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has current primary IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.627743   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-302198", mac: "52:54:00:cb:2d:47", ip: "192.168.61.45"} in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.705068   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Getting to WaitForSSH function...
	I0803 23:59:33.705099   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Reserved static IP address: 192.168.61.45
	I0803 23:59:33.705139   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Waiting for SSH to be available...
	I0803 23:59:33.707708   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.708292   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:33.708320   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.708510   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Using SSH client type: external
	I0803 23:59:33.708533   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/id_rsa (-rw-------)
	I0803 23:59:33.708567   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.45 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:59:33.708578   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | About to run SSH command:
	I0803 23:59:33.708590   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | exit 0
	I0803 23:59:33.837522   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | SSH cmd err, output: <nil>: 
	I0803 23:59:33.837852   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) KVM machine creation complete!
	I0803 23:59:33.838199   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetConfigRaw
	I0803 23:59:33.838719   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:33.838925   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:33.839122   53438 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:59:33.839138   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetState
	I0803 23:59:33.840511   53438 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:59:33.840528   53438 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:59:33.840535   53438 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:59:33.840555   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:33.843350   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.843705   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:33.843730   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.843923   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:33.844118   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:33.844290   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:33.844443   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:33.844613   53438 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:33.844802   53438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.45 22 <nil> <nil>}
	I0803 23:59:33.844820   53438 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:59:33.940815   53438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:59:33.940845   53438 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:59:33.940856   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:33.943843   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.944225   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:33.944258   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:33.944453   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:33.944684   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:33.944872   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:33.945029   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:33.945217   53438 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:33.945415   53438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.45 22 <nil> <nil>}
	I0803 23:59:33.945429   53438 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:59:34.046482   53438 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:59:34.046551   53438 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:59:34.046565   53438 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:59:34.046579   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetMachineName
	I0803 23:59:34.046819   53438 buildroot.go:166] provisioning hostname "kubernetes-upgrade-302198"
	I0803 23:59:34.046845   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetMachineName
	I0803 23:59:34.047089   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:34.049989   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.050462   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.050489   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.050650   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:34.050835   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.051098   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.051267   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:34.051432   53438 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:34.051574   53438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.45 22 <nil> <nil>}
	I0803 23:59:34.051586   53438 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-302198 && echo "kubernetes-upgrade-302198" | sudo tee /etc/hostname
	I0803 23:59:34.164675   53438 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-302198
	
	I0803 23:59:34.164713   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:34.167518   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.167880   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.167916   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.168117   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:34.168296   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.168435   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.168532   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:34.168689   53438 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:34.168871   53438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.45 22 <nil> <nil>}
	I0803 23:59:34.168887   53438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-302198' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-302198/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-302198' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:59:34.283505   53438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:59:34.283542   53438 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0803 23:59:34.283567   53438 buildroot.go:174] setting up certificates
	I0803 23:59:34.283581   53438 provision.go:84] configureAuth start
	I0803 23:59:34.283592   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetMachineName
	I0803 23:59:34.283921   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetIP
	I0803 23:59:34.286433   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.286806   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.286840   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.286956   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:34.289064   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.289406   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.289432   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.289719   53438 provision.go:143] copyHostCerts
	I0803 23:59:34.289791   53438 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0803 23:59:34.289802   53438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0803 23:59:34.289869   53438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0803 23:59:34.290000   53438 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0803 23:59:34.290012   53438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0803 23:59:34.290043   53438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0803 23:59:34.290118   53438 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0803 23:59:34.290128   53438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0803 23:59:34.290158   53438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0803 23:59:34.290222   53438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-302198 san=[127.0.0.1 192.168.61.45 kubernetes-upgrade-302198 localhost minikube]
	I0803 23:59:34.483799   53438 provision.go:177] copyRemoteCerts
	I0803 23:59:34.483860   53438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:59:34.483885   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:34.486586   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.486957   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.486990   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.487183   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:34.487375   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.487542   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:34.487679   53438 sshutil.go:53] new ssh client: &{IP:192.168.61.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/id_rsa Username:docker}
	I0803 23:59:34.569530   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:59:34.596060   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0803 23:59:34.621646   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0803 23:59:34.646012   53438 provision.go:87] duration metric: took 362.417598ms to configureAuth
	I0803 23:59:34.646045   53438 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:59:34.646230   53438 config.go:182] Loaded profile config "kubernetes-upgrade-302198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0803 23:59:34.646297   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:34.649082   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.649409   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.649444   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.649604   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:34.649799   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.650022   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.650197   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:34.650402   53438 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:34.650589   53438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.45 22 <nil> <nil>}
	I0803 23:59:34.650609   53438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:59:34.922166   53438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:59:34.922201   53438 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:59:34.922212   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetURL
	I0803 23:59:34.923587   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | Using libvirt version 6000000
	I0803 23:59:34.925985   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.926421   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.926452   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.926616   53438 main.go:141] libmachine: Docker is up and running!
	I0803 23:59:34.926635   53438 main.go:141] libmachine: Reticulating splines...
	I0803 23:59:34.926641   53438 client.go:171] duration metric: took 24.770246754s to LocalClient.Create
	I0803 23:59:34.926665   53438 start.go:167] duration metric: took 24.770321553s to libmachine.API.Create "kubernetes-upgrade-302198"
	I0803 23:59:34.926678   53438 start.go:293] postStartSetup for "kubernetes-upgrade-302198" (driver="kvm2")
	I0803 23:59:34.926694   53438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:59:34.926731   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:34.926996   53438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:59:34.927021   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:34.929167   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.929586   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:34.929631   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:34.929696   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:34.929924   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:34.930054   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:34.930160   53438 sshutil.go:53] new ssh client: &{IP:192.168.61.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/id_rsa Username:docker}
	I0803 23:59:35.008533   53438 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:59:35.012981   53438 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:59:35.013003   53438 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0803 23:59:35.013077   53438 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0803 23:59:35.013151   53438 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0803 23:59:35.013246   53438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:59:35.023803   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:59:35.048498   53438 start.go:296] duration metric: took 121.802178ms for postStartSetup
	I0803 23:59:35.048548   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetConfigRaw
	I0803 23:59:35.049125   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetIP
	I0803 23:59:35.052463   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.052849   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:35.052868   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.053215   53438 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/config.json ...
	I0803 23:59:35.053423   53438 start.go:128] duration metric: took 24.918686342s to createHost
	I0803 23:59:35.053448   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:35.055844   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.056212   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:35.056245   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.056451   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:35.056683   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:35.056821   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:35.056997   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:35.057150   53438 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:35.057339   53438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.45 22 <nil> <nil>}
	I0803 23:59:35.057371   53438 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 23:59:35.154080   53438 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729575.104499913
	
	I0803 23:59:35.154105   53438 fix.go:216] guest clock: 1722729575.104499913
	I0803 23:59:35.154115   53438 fix.go:229] Guest: 2024-08-03 23:59:35.104499913 +0000 UTC Remote: 2024-08-03 23:59:35.053435935 +0000 UTC m=+72.511428681 (delta=51.063978ms)
	I0803 23:59:35.154148   53438 fix.go:200] guest clock delta is within tolerance: 51.063978ms
	I0803 23:59:35.154158   53438 start.go:83] releasing machines lock for "kubernetes-upgrade-302198", held for 25.019621676s
	I0803 23:59:35.154186   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:35.154457   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetIP
	I0803 23:59:35.157396   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.157877   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:35.157910   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.158039   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:35.158576   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:35.158765   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .DriverName
	I0803 23:59:35.158843   53438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:59:35.158883   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:35.158996   53438 ssh_runner.go:195] Run: cat /version.json
	I0803 23:59:35.159021   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHHostname
	I0803 23:59:35.162184   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.162499   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.162714   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:35.162739   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.162913   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:35.162995   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:35.163029   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:35.163189   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHPort
	I0803 23:59:35.163195   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:35.163346   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHKeyPath
	I0803 23:59:35.163354   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:35.163534   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetSSHUsername
	I0803 23:59:35.163558   53438 sshutil.go:53] new ssh client: &{IP:192.168.61.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/id_rsa Username:docker}
	I0803 23:59:35.163686   53438 sshutil.go:53] new ssh client: &{IP:192.168.61.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kubernetes-upgrade-302198/id_rsa Username:docker}
	I0803 23:59:35.266012   53438 ssh_runner.go:195] Run: systemctl --version
	I0803 23:59:35.281498   53438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:59:35.452159   53438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:59:35.459357   53438 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:59:35.459440   53438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:59:35.475855   53438 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:59:35.475896   53438 start.go:495] detecting cgroup driver to use...
	I0803 23:59:35.475978   53438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:59:35.493072   53438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:59:35.508056   53438 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:59:35.508117   53438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:59:35.523923   53438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:59:35.538032   53438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:59:35.667480   53438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:59:35.835974   53438 docker.go:233] disabling docker service ...
	I0803 23:59:35.836053   53438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:59:35.851834   53438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:59:35.866171   53438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:59:36.006873   53438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:59:36.142053   53438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:59:36.159964   53438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:59:36.185556   53438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0803 23:59:36.185627   53438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:36.201180   53438 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:59:36.201246   53438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:36.219881   53438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:36.235272   53438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:36.247889   53438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:59:36.260607   53438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:59:36.271663   53438 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:59:36.271739   53438 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:59:36.287508   53438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:59:36.299259   53438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:59:36.453285   53438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:59:36.604186   53438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:59:36.604269   53438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:59:36.609601   53438 start.go:563] Will wait 60s for crictl version
	I0803 23:59:36.609664   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:36.614004   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:59:36.665118   53438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:59:36.665221   53438 ssh_runner.go:195] Run: crio --version
	I0803 23:59:36.696708   53438 ssh_runner.go:195] Run: crio --version
	I0803 23:59:36.735506   53438 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0803 23:59:36.736918   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) Calling .GetIP
	I0803 23:59:36.740150   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:36.740735   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:2d:47", ip: ""} in network mk-kubernetes-upgrade-302198: {Iface:virbr3 ExpiryTime:2024-08-04 00:59:25 +0000 UTC Type:0 Mac:52:54:00:cb:2d:47 Iaid: IPaddr:192.168.61.45 Prefix:24 Hostname:kubernetes-upgrade-302198 Clientid:01:52:54:00:cb:2d:47}
	I0803 23:59:36.740788   53438 main.go:141] libmachine: (kubernetes-upgrade-302198) DBG | domain kubernetes-upgrade-302198 has defined IP address 192.168.61.45 and MAC address 52:54:00:cb:2d:47 in network mk-kubernetes-upgrade-302198
	I0803 23:59:36.741028   53438 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0803 23:59:36.745791   53438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:59:36.759291   53438 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-302198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.45 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:59:36.759406   53438 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0803 23:59:36.759450   53438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:59:36.804312   53438 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0803 23:59:36.804390   53438 ssh_runner.go:195] Run: which lz4
	I0803 23:59:36.809319   53438 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 23:59:36.813867   53438 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 23:59:36.813897   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0803 23:59:38.574046   53438 crio.go:462] duration metric: took 1.764761777s to copy over tarball
	I0803 23:59:38.574158   53438 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 23:59:41.443696   53438 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.86950096s)
	I0803 23:59:41.443734   53438 crio.go:469] duration metric: took 2.869636774s to extract the tarball
	I0803 23:59:41.443743   53438 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 23:59:41.488533   53438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:59:41.549737   53438 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0803 23:59:41.549764   53438 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0803 23:59:41.549814   53438 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:59:41.549874   53438 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0803 23:59:41.549892   53438 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0803 23:59:41.549900   53438 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0803 23:59:41.549955   53438 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0803 23:59:41.549941   53438 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0803 23:59:41.550115   53438 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0803 23:59:41.550272   53438 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0803 23:59:41.551753   53438 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:59:41.551782   53438 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0803 23:59:41.551791   53438 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0803 23:59:41.551752   53438 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0803 23:59:41.551865   53438 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0803 23:59:41.551753   53438 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0803 23:59:41.552140   53438 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0803 23:59:41.552257   53438 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0803 23:59:41.718550   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0803 23:59:41.768095   53438 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0803 23:59:41.768150   53438 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0803 23:59:41.768214   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:41.772487   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0803 23:59:41.811145   53438 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0803 23:59:41.816642   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0803 23:59:41.861062   53438 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0803 23:59:41.861100   53438 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0803 23:59:41.861145   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:41.865833   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0803 23:59:41.907871   53438 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0803 23:59:41.911063   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0803 23:59:41.911094   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0803 23:59:41.914415   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0803 23:59:41.916422   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0803 23:59:41.926899   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0803 23:59:42.056096   53438 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0803 23:59:42.056142   53438 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0803 23:59:42.056161   53438 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0803 23:59:42.056176   53438 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0803 23:59:42.056215   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:42.056215   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:42.058954   53438 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0803 23:59:42.058993   53438 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0803 23:59:42.059019   53438 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0803 23:59:42.059037   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:42.059052   53438 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0803 23:59:42.059090   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:42.059137   53438 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0803 23:59:42.059154   53438 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0803 23:59:42.059174   53438 ssh_runner.go:195] Run: which crictl
	I0803 23:59:42.063687   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0803 23:59:42.063942   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0803 23:59:42.074045   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0803 23:59:42.076194   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0803 23:59:42.076255   53438 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0803 23:59:42.197988   53438 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0803 23:59:42.198003   53438 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0803 23:59:42.209603   53438 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0803 23:59:42.219250   53438 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0803 23:59:42.219458   53438 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0803 23:59:42.404488   53438 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:59:42.552977   53438 cache_images.go:92] duration metric: took 1.003190329s to LoadCachedImages
	W0803 23:59:42.553093   53438 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0803 23:59:42.553115   53438 kubeadm.go:934] updating node { 192.168.61.45 8443 v1.20.0 crio true true} ...
	I0803 23:59:42.553247   53438 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-302198 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:59:42.553330   53438 ssh_runner.go:195] Run: crio config
	I0803 23:59:42.610468   53438 cni.go:84] Creating CNI manager for ""
	I0803 23:59:42.610499   53438 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:59:42.610515   53438 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:59:42.610540   53438 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.45 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-302198 NodeName:kubernetes-upgrade-302198 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0803 23:59:42.610701   53438 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-302198"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:59:42.610784   53438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0803 23:59:42.621217   53438 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:59:42.621295   53438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 23:59:42.630858   53438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0803 23:59:42.652309   53438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:59:42.670889   53438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0803 23:59:42.692080   53438 ssh_runner.go:195] Run: grep 192.168.61.45	control-plane.minikube.internal$ /etc/hosts
	I0803 23:59:42.696530   53438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.45	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:59:42.709485   53438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:59:42.843766   53438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:59:42.865331   53438 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198 for IP: 192.168.61.45
	I0803 23:59:42.865372   53438 certs.go:194] generating shared ca certs ...
	I0803 23:59:42.865392   53438 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:42.865577   53438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0803 23:59:42.865647   53438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0803 23:59:42.865663   53438 certs.go:256] generating profile certs ...
	I0803 23:59:42.865745   53438 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/client.key
	I0803 23:59:42.865764   53438 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/client.crt with IP's: []
	I0803 23:59:43.213230   53438 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/client.crt ...
	I0803 23:59:43.213271   53438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/client.crt: {Name:mkdc4dca56455ccec02040764cd038d866d6d40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:43.213487   53438 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/client.key ...
	I0803 23:59:43.213516   53438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/client.key: {Name:mk1490d1063f2a5c6f15781cbe6af0c1324c15ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:43.213644   53438 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key.a2dad221
	I0803 23:59:43.213673   53438 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.crt.a2dad221 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.45]
	I0803 23:59:43.485138   53438 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.crt.a2dad221 ...
	I0803 23:59:43.485166   53438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.crt.a2dad221: {Name:mk3620b14aa3bb9ae0a9f235e3237d92bd307ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:43.485334   53438 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key.a2dad221 ...
	I0803 23:59:43.485370   53438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key.a2dad221: {Name:mkad0b8be9bf1af1aefb90b8de42779802d92e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:43.485488   53438 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.crt.a2dad221 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.crt
	I0803 23:59:43.485588   53438 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key.a2dad221 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key
	I0803 23:59:43.485667   53438 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.key
	I0803 23:59:43.485691   53438 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.crt with IP's: []
	I0803 23:59:43.791489   53438 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.crt ...
	I0803 23:59:43.791517   53438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.crt: {Name:mk76fa673817a0700b7c630fa6790363fa420ae8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:43.791695   53438 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.key ...
	I0803 23:59:43.791713   53438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.key: {Name:mk51b7ab908e22aa79355845808b1f469dad61d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:43.791929   53438 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0803 23:59:43.791970   53438 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0803 23:59:43.791985   53438 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0803 23:59:43.792016   53438 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0803 23:59:43.792045   53438 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:59:43.792079   53438 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0803 23:59:43.792139   53438 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0803 23:59:43.792748   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:59:43.833012   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:59:43.871199   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:59:43.916390   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0803 23:59:43.953689   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0803 23:59:43.987338   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:59:44.018464   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:59:44.061182   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0803 23:59:44.094615   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0803 23:59:44.128494   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0803 23:59:44.158856   53438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:59:44.192341   53438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:59:44.214869   53438 ssh_runner.go:195] Run: openssl version
	I0803 23:59:44.221129   53438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0803 23:59:44.233126   53438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0803 23:59:44.239786   53438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0803 23:59:44.239850   53438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0803 23:59:44.247640   53438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:59:44.262830   53438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:59:44.274343   53438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:44.279684   53438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:44.279746   53438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:44.285590   53438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:59:44.297176   53438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0803 23:59:44.309295   53438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0803 23:59:44.314635   53438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0803 23:59:44.314704   53438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0803 23:59:44.321488   53438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0803 23:59:44.333427   53438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:59:44.337785   53438 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:59:44.337854   53438 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-302198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-302198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.45 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:59:44.337946   53438 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:59:44.338018   53438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:59:44.379265   53438 cri.go:89] found id: ""
	I0803 23:59:44.379345   53438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:59:44.390896   53438 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 23:59:44.402782   53438 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 23:59:44.413930   53438 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 23:59:44.413954   53438 kubeadm.go:157] found existing configuration files:
	
	I0803 23:59:44.413995   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 23:59:44.423671   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 23:59:44.423727   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 23:59:44.434221   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 23:59:44.443941   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 23:59:44.444019   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 23:59:44.454402   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 23:59:44.464068   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 23:59:44.464151   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 23:59:44.474597   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 23:59:44.485516   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 23:59:44.485585   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 23:59:44.496875   53438 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 23:59:44.817459   53438 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:01:42.248263   53438 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:01:42.248353   53438 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:42.250233   53438 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:01:42.250312   53438 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:01:42.250383   53438 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:01:42.250501   53438 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:01:42.250621   53438 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:01:42.250708   53438 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:01:42.413057   53438 out.go:204]   - Generating certificates and keys ...
	I0804 00:01:42.413216   53438 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:01:42.413302   53438 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:01:42.413411   53438 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:01:42.413490   53438 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:01:42.413588   53438 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:01:42.413681   53438 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:01:42.413768   53438 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:01:42.413927   53438 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302198 localhost] and IPs [192.168.61.45 127.0.0.1 ::1]
	I0804 00:01:42.414001   53438 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:01:42.414164   53438 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302198 localhost] and IPs [192.168.61.45 127.0.0.1 ::1]
	I0804 00:01:42.414251   53438 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:01:42.414335   53438 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:01:42.414395   53438 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:01:42.414490   53438 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:01:42.414579   53438 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:01:42.414663   53438 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:01:42.414746   53438 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:01:42.414972   53438 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:01:42.415139   53438 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:01:42.415266   53438 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:01:42.415324   53438 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:01:42.415394   53438 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:01:42.552622   53438 out.go:204]   - Booting up control plane ...
	I0804 00:01:42.552782   53438 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:01:42.552877   53438 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:01:42.552992   53438 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:01:42.553123   53438 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:01:42.553344   53438 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:01:42.553431   53438 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:01:42.553540   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:01:42.553770   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:01:42.553873   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:01:42.554115   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:01:42.554216   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:01:42.554461   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:01:42.554557   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:01:42.554795   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:01:42.554914   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:01:42.555103   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:01:42.555113   53438 kubeadm.go:310] 
	I0804 00:01:42.555169   53438 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:01:42.555226   53438 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:01:42.555235   53438 kubeadm.go:310] 
	I0804 00:01:42.555285   53438 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:01:42.555333   53438 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:01:42.555449   53438 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:01:42.555458   53438 kubeadm.go:310] 
	I0804 00:01:42.555573   53438 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:01:42.555621   53438 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:01:42.555677   53438 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:01:42.555686   53438 kubeadm.go:310] 
	I0804 00:01:42.555812   53438 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:01:42.555920   53438 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:01:42.555930   53438 kubeadm.go:310] 
	I0804 00:01:42.556044   53438 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:01:42.556151   53438 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:01:42.556281   53438 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:01:42.556371   53438 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0804 00:01:42.556506   53438 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302198 localhost] and IPs [192.168.61.45 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302198 localhost] and IPs [192.168.61.45 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-302198 localhost] and IPs [192.168.61.45 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-302198 localhost] and IPs [192.168.61.45 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 00:01:42.556560   53438 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:01:42.556840   53438 kubeadm.go:310] 
	I0804 00:01:43.048079   53438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:01:43.065755   53438 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:01:43.078023   53438 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:01:43.078046   53438 kubeadm.go:157] found existing configuration files:
	
	I0804 00:01:43.078102   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:01:43.089975   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:01:43.090051   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:01:43.102385   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:01:43.113914   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:01:43.113994   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:01:43.125864   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:01:43.137210   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:01:43.137298   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:01:43.148755   53438 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:01:43.159976   53438 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:01:43.160049   53438 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:01:43.173092   53438 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:01:43.250521   53438 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:01:43.250645   53438 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:01:43.401442   53438 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:01:43.401561   53438 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:01:43.401707   53438 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:01:43.611082   53438 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:01:43.809132   53438 out.go:204]   - Generating certificates and keys ...
	I0804 00:01:43.809290   53438 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:01:43.809408   53438 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:01:43.809514   53438 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:01:43.809708   53438 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:01:43.809820   53438 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:01:43.809900   53438 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:01:43.810008   53438 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:01:43.810104   53438 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:01:43.810217   53438 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:01:43.810303   53438 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:01:43.810338   53438 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:01:43.810427   53438 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:01:43.810508   53438 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:01:43.981030   53438 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:01:44.036247   53438 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:01:44.382736   53438 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:01:44.405094   53438 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:01:44.405568   53438 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:01:44.405631   53438 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:01:44.615388   53438 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:01:44.616992   53438 out.go:204]   - Booting up control plane ...
	I0804 00:01:44.617228   53438 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:01:44.630531   53438 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:01:44.631952   53438 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:01:44.633263   53438 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:01:44.641396   53438 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:02:24.642289   53438 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:02:24.642411   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:02:24.642647   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:02:29.643464   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:02:29.643730   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:02:39.644029   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:02:39.644331   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:02:59.645200   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:02:59.645585   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:03:39.647382   53438 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:03:39.647626   53438 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:03:39.647637   53438 kubeadm.go:310] 
	I0804 00:03:39.647697   53438 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:03:39.647744   53438 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:03:39.647752   53438 kubeadm.go:310] 
	I0804 00:03:39.647797   53438 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:03:39.647837   53438 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:03:39.647968   53438 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:03:39.647979   53438 kubeadm.go:310] 
	I0804 00:03:39.648136   53438 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:03:39.648183   53438 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:03:39.648235   53438 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:03:39.648244   53438 kubeadm.go:310] 
	I0804 00:03:39.648363   53438 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:03:39.648465   53438 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:03:39.648475   53438 kubeadm.go:310] 
	I0804 00:03:39.648595   53438 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:03:39.648730   53438 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:03:39.648856   53438 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:03:39.648956   53438 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:03:39.648968   53438 kubeadm.go:310] 
	I0804 00:03:39.650000   53438 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:03:39.650118   53438 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:03:39.650222   53438 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:03:39.650301   53438 kubeadm.go:394] duration metric: took 3m55.312452772s to StartCluster
	I0804 00:03:39.650360   53438 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:03:39.650422   53438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:03:39.707507   53438 cri.go:89] found id: ""
	I0804 00:03:39.707530   53438 logs.go:276] 0 containers: []
	W0804 00:03:39.707540   53438 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:03:39.707548   53438 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:03:39.707610   53438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:03:39.752687   53438 cri.go:89] found id: ""
	I0804 00:03:39.752714   53438 logs.go:276] 0 containers: []
	W0804 00:03:39.752733   53438 logs.go:278] No container was found matching "etcd"
	I0804 00:03:39.752740   53438 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:03:39.752800   53438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:03:39.795815   53438 cri.go:89] found id: ""
	I0804 00:03:39.795839   53438 logs.go:276] 0 containers: []
	W0804 00:03:39.795848   53438 logs.go:278] No container was found matching "coredns"
	I0804 00:03:39.795854   53438 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:03:39.795924   53438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:03:39.848904   53438 cri.go:89] found id: ""
	I0804 00:03:39.848932   53438 logs.go:276] 0 containers: []
	W0804 00:03:39.848942   53438 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:03:39.848949   53438 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:03:39.849010   53438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:03:39.896685   53438 cri.go:89] found id: ""
	I0804 00:03:39.896716   53438 logs.go:276] 0 containers: []
	W0804 00:03:39.896727   53438 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:03:39.896735   53438 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:03:39.896806   53438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:03:39.942462   53438 cri.go:89] found id: ""
	I0804 00:03:39.942489   53438 logs.go:276] 0 containers: []
	W0804 00:03:39.942498   53438 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:03:39.942506   53438 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:03:39.942561   53438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:03:39.986524   53438 cri.go:89] found id: ""
	I0804 00:03:39.986551   53438 logs.go:276] 0 containers: []
	W0804 00:03:39.986561   53438 logs.go:278] No container was found matching "kindnet"
	I0804 00:03:39.986571   53438 logs.go:123] Gathering logs for kubelet ...
	I0804 00:03:39.986585   53438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:03:40.046311   53438 logs.go:123] Gathering logs for dmesg ...
	I0804 00:03:40.046352   53438 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:03:40.062058   53438 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:03:40.062086   53438 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:03:40.213523   53438 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:03:40.213549   53438 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:03:40.213564   53438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:03:40.341367   53438 logs.go:123] Gathering logs for container status ...
	I0804 00:03:40.341454   53438 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 00:03:40.391049   53438 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:03:40.391098   53438 out.go:239] * 
	* 
	W0804 00:03:40.391153   53438 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:03:40.391187   53438 out.go:239] * 
	* 
	W0804 00:03:40.392322   53438 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:03:40.396034   53438 out.go:177] 
	W0804 00:03:40.397323   53438 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:03:40.397393   53438 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:03:40.397418   53438 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:03:40.399045   53438 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-302198
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-302198: (1.597818057s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-302198 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-302198 status --format={{.Host}}: exit status 7 (88.097236ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m19.098507738s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-302198 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.796612ms)

-- stdout --
	* [kubernetes-upgrade-302198] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-302198
	    minikube start -p kubernetes-upgrade-302198 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3021982 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-302198 --kubernetes-version=v1.31.0-rc.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-302198 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.687004182s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-04 00:05:44.088917578 +0000 UTC m=+4680.031161626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-302198 -n kubernetes-upgrade-302198
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-302198 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-302198 logs -n 25: (1.613980869s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo cat                            | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo cat                            | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo cat                            | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo cat                            | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo                                | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo find                           | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-159277 sudo crio                           | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-159277                                     | cilium-159277             | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC | 04 Aug 24 00:03 UTC |
	| start   | -p old-k8s-version-576210                            | old-k8s-version-576210    | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-551054                               | NoKubernetes-551054       | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC | 04 Aug 24 00:04 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-551054                               | NoKubernetes-551054       | jenkins | v1.33.1 | 04 Aug 24 00:04 UTC | 04 Aug 24 00:04 UTC |
	| start   | -p NoKubernetes-551054                               | NoKubernetes-551054       | jenkins | v1.33.1 | 04 Aug 24 00:04 UTC | 04 Aug 24 00:05 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302198                         | kubernetes-upgrade-302198 | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302198                         | kubernetes-upgrade-302198 | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                    |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-551054 sudo                          | NoKubernetes-551054       | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| start   | -p cert-expiration-705918                            | cert-expiration-705918    | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-551054                               | NoKubernetes-551054       | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p NoKubernetes-551054                               | NoKubernetes-551054       | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:05:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:05:24.254416   61662 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:05:24.254657   61662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:05:24.254660   61662 out.go:304] Setting ErrFile to fd 2...
	I0804 00:05:24.254663   61662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:05:24.254816   61662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:05:24.255294   61662 out.go:298] Setting JSON to false
	I0804 00:05:24.256197   61662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6468,"bootTime":1722723456,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:05:24.256241   61662 start.go:139] virtualization: kvm guest
	I0804 00:05:24.258330   61662 out.go:177] * [NoKubernetes-551054] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:05:24.259723   61662 notify.go:220] Checking for updates...
	I0804 00:05:24.259731   61662 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:05:24.261136   61662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:05:24.262560   61662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:05:24.263935   61662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:05:24.265130   61662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:05:24.266415   61662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:05:24.268001   61662 config.go:182] Loaded profile config "NoKubernetes-551054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0804 00:05:24.268351   61662 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:05:24.268391   61662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:24.282927   61662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
	I0804 00:05:24.283276   61662 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:24.283820   61662 main.go:141] libmachine: Using API Version  1
	I0804 00:05:24.283828   61662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:24.284164   61662 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:24.284374   61662 main.go:141] libmachine: (NoKubernetes-551054) Calling .DriverName
	I0804 00:05:24.284606   61662 start.go:1783] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0804 00:05:24.284624   61662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:05:24.284900   61662 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:05:24.284927   61662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:24.299438   61662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0804 00:05:24.299918   61662 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:24.300369   61662 main.go:141] libmachine: Using API Version  1
	I0804 00:05:24.300392   61662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:24.300707   61662 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:24.300898   61662 main.go:141] libmachine: (NoKubernetes-551054) Calling .DriverName
	I0804 00:05:24.336012   61662 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:05:24.337350   61662 start.go:297] selected driver: kvm2
	I0804 00:05:24.337370   61662 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-551054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-551054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.201 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:05:24.337466   61662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:05:24.337776   61662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:05:24.337829   61662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:05:24.353236   61662 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:05:24.353974   61662 cni.go:84] Creating CNI manager for ""
	I0804 00:05:24.353984   61662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:05:24.354039   61662 start.go:340] cluster config:
	{Name:NoKubernetes-551054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-551054 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.201 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:05:24.354132   61662 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:05:24.355951   61662 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-551054
	I0804 00:05:19.904003   61416 machine.go:94] provisionDockerMachine start ...
	I0804 00:05:19.904019   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .DriverName
	I0804 00:05:19.904269   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHHostname
	I0804 00:05:19.907017   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:19.907363   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:48:8c", ip: ""} in network mk-cert-expiration-705918: {Iface:virbr2 ExpiryTime:2024-08-04 01:01:50 +0000 UTC Type:0 Mac:52:54:00:33:48:8c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:cert-expiration-705918 Clientid:01:52:54:00:33:48:8c}
	I0804 00:05:19.907382   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined IP address 192.168.39.231 and MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:19.907600   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHPort
	I0804 00:05:19.907770   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:19.907930   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:19.908081   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHUsername
	I0804 00:05:19.908269   61416 main.go:141] libmachine: Using SSH client type: native
	I0804 00:05:19.908526   61416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0804 00:05:19.908534   61416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:05:20.014045   61416 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-705918
	
	I0804 00:05:20.014060   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetMachineName
	I0804 00:05:20.014359   61416 buildroot.go:166] provisioning hostname "cert-expiration-705918"
	I0804 00:05:20.014376   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetMachineName
	I0804 00:05:20.014588   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHHostname
	I0804 00:05:20.017836   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.018220   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:48:8c", ip: ""} in network mk-cert-expiration-705918: {Iface:virbr2 ExpiryTime:2024-08-04 01:01:50 +0000 UTC Type:0 Mac:52:54:00:33:48:8c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:cert-expiration-705918 Clientid:01:52:54:00:33:48:8c}
	I0804 00:05:20.018240   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined IP address 192.168.39.231 and MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.018405   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHPort
	I0804 00:05:20.018589   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:20.018715   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:20.018841   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHUsername
	I0804 00:05:20.019014   61416 main.go:141] libmachine: Using SSH client type: native
	I0804 00:05:20.019207   61416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0804 00:05:20.019218   61416 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-705918 && echo "cert-expiration-705918" | sudo tee /etc/hostname
	I0804 00:05:20.140561   61416 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-705918
	
	I0804 00:05:20.140579   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHHostname
	I0804 00:05:20.143384   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.143709   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:48:8c", ip: ""} in network mk-cert-expiration-705918: {Iface:virbr2 ExpiryTime:2024-08-04 01:01:50 +0000 UTC Type:0 Mac:52:54:00:33:48:8c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:cert-expiration-705918 Clientid:01:52:54:00:33:48:8c}
	I0804 00:05:20.143739   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined IP address 192.168.39.231 and MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.143923   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHPort
	I0804 00:05:20.144131   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:20.144290   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:20.144432   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHUsername
	I0804 00:05:20.144605   61416 main.go:141] libmachine: Using SSH client type: native
	I0804 00:05:20.144780   61416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0804 00:05:20.144791   61416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-705918' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-705918/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-705918' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:05:20.254956   61416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:05:20.254976   61416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:05:20.255017   61416 buildroot.go:174] setting up certificates
	I0804 00:05:20.255027   61416 provision.go:84] configureAuth start
	I0804 00:05:20.255038   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetMachineName
	I0804 00:05:20.255379   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetIP
	I0804 00:05:20.258389   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.258776   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:48:8c", ip: ""} in network mk-cert-expiration-705918: {Iface:virbr2 ExpiryTime:2024-08-04 01:01:50 +0000 UTC Type:0 Mac:52:54:00:33:48:8c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:cert-expiration-705918 Clientid:01:52:54:00:33:48:8c}
	I0804 00:05:20.258797   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined IP address 192.168.39.231 and MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.258994   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHHostname
	I0804 00:05:20.261654   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.262073   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:48:8c", ip: ""} in network mk-cert-expiration-705918: {Iface:virbr2 ExpiryTime:2024-08-04 01:01:50 +0000 UTC Type:0 Mac:52:54:00:33:48:8c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:cert-expiration-705918 Clientid:01:52:54:00:33:48:8c}
	I0804 00:05:20.262093   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined IP address 192.168.39.231 and MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.262256   61416 provision.go:143] copyHostCerts
	I0804 00:05:20.262335   61416 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:05:20.262342   61416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:05:20.262408   61416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:05:20.262532   61416 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:05:20.262537   61416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:05:20.262565   61416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:05:20.262637   61416 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:05:20.262640   61416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:05:20.262658   61416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:05:20.262710   61416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-705918 san=[127.0.0.1 192.168.39.231 cert-expiration-705918 localhost minikube]
	I0804 00:05:20.347338   61416 provision.go:177] copyRemoteCerts
	I0804 00:05:20.347384   61416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:05:20.347404   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHHostname
	I0804 00:05:20.350601   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.351004   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:48:8c", ip: ""} in network mk-cert-expiration-705918: {Iface:virbr2 ExpiryTime:2024-08-04 01:01:50 +0000 UTC Type:0 Mac:52:54:00:33:48:8c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:cert-expiration-705918 Clientid:01:52:54:00:33:48:8c}
	I0804 00:05:20.351027   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined IP address 192.168.39.231 and MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.351309   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHPort
	I0804 00:05:20.351561   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:20.351787   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHUsername
	I0804 00:05:20.351946   61416 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/cert-expiration-705918/id_rsa Username:docker}
	I0804 00:05:20.440621   61416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:05:20.475196   61416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:05:20.506433   61416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:05:20.533822   61416 provision.go:87] duration metric: took 278.782643ms to configureAuth
	I0804 00:05:20.533846   61416 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:05:20.534050   61416 config.go:182] Loaded profile config "cert-expiration-705918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:05:20.534156   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHHostname
	I0804 00:05:20.537247   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.537659   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:48:8c", ip: ""} in network mk-cert-expiration-705918: {Iface:virbr2 ExpiryTime:2024-08-04 01:01:50 +0000 UTC Type:0 Mac:52:54:00:33:48:8c Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:cert-expiration-705918 Clientid:01:52:54:00:33:48:8c}
	I0804 00:05:20.537684   61416 main.go:141] libmachine: (cert-expiration-705918) DBG | domain cert-expiration-705918 has defined IP address 192.168.39.231 and MAC address 52:54:00:33:48:8c in network mk-cert-expiration-705918
	I0804 00:05:20.537912   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHPort
	I0804 00:05:20.538133   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:20.538316   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHKeyPath
	I0804 00:05:20.538496   61416 main.go:141] libmachine: (cert-expiration-705918) Calling .GetSSHUsername
	I0804 00:05:20.538657   61416 main.go:141] libmachine: Using SSH client type: native
	I0804 00:05:20.538897   61416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0804 00:05:20.538913   61416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:05:21.482923   61208 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:05:21.755567   61208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:05:21.797853   61208 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:05:21.797874   61208 crio.go:433] Images already preloaded, skipping extraction
	I0804 00:05:21.797923   61208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:05:21.833668   61208 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:05:21.833689   61208 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:05:21.833697   61208 kubeadm.go:934] updating node { 192.168.61.45 8443 v1.31.0-rc.0 crio true true} ...
	I0804 00:05:21.833810   61208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-302198 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-302198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:05:21.833888   61208 ssh_runner.go:195] Run: crio config
	I0804 00:05:21.881324   61208 cni.go:84] Creating CNI manager for ""
	I0804 00:05:21.881346   61208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:05:21.881363   61208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:05:21.881385   61208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.45 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-302198 NodeName:kubernetes-upgrade-302198 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:05:21.881509   61208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-302198"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.45
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.45"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:05:21.881567   61208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0804 00:05:21.891728   61208 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:05:21.891804   61208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:05:21.901222   61208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0804 00:05:21.918992   61208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0804 00:05:21.935864   61208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0804 00:05:21.953256   61208 ssh_runner.go:195] Run: grep 192.168.61.45	control-plane.minikube.internal$ /etc/hosts
	I0804 00:05:21.957441   61208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:05:22.118226   61208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:05:22.134711   61208 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198 for IP: 192.168.61.45
	I0804 00:05:22.134734   61208 certs.go:194] generating shared ca certs ...
	I0804 00:05:22.134751   61208 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:05:22.134905   61208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:05:22.134960   61208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:05:22.134979   61208 certs.go:256] generating profile certs ...
	I0804 00:05:22.135130   61208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/client.key
	I0804 00:05:22.135192   61208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key.a2dad221
	I0804 00:05:22.135242   61208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.key
	I0804 00:05:22.135384   61208 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:05:22.135424   61208 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:05:22.135439   61208 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:05:22.135474   61208 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:05:22.135507   61208 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:05:22.135545   61208 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:05:22.135600   61208 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:05:22.136171   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:05:22.159685   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:05:22.184708   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:05:22.211448   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:05:22.241284   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0804 00:05:22.266186   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:05:22.290220   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:05:22.314733   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kubernetes-upgrade-302198/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:05:22.346291   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:05:22.375961   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:05:22.411205   61208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:05:22.437010   61208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:05:22.457815   61208 ssh_runner.go:195] Run: openssl version
	I0804 00:05:22.464883   61208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:05:22.476520   61208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:05:22.481556   61208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:05:22.481605   61208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:05:22.487544   61208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:05:22.497789   61208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:05:22.509863   61208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:05:22.514712   61208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:05:22.514768   61208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:05:22.520861   61208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:05:22.531843   61208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:05:22.544602   61208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:05:22.550396   61208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:05:22.550451   61208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:05:22.556601   61208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:05:22.567983   61208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:05:22.573374   61208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:05:22.581171   61208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:05:22.588278   61208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:05:22.596259   61208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:05:22.602669   61208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:05:22.608779   61208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
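	The checks above run openssl x509 -noout -checkend 86400 against each control-plane certificate, i.e. they fail if a certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509 (an editorial sketch, not minikube code; the certificate path is copied from the log purely for illustration):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log above; adjust for the certificate being checked.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		// openssl's -checkend 86400 exits non-zero if the cert expires within 86400 seconds.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println("certificate valid beyond 24h, until", cert.NotAfter)
		}
	}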
	I0804 00:05:22.615137   61208 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-302198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-302198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.45 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:05:22.615245   61208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:05:22.615307   61208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:05:22.651804   61208 cri.go:89] found id: "0cc13c3506b087693fdc2de55c2c919d2d00858633348d0d25c8fb775bdedea0"
	I0804 00:05:22.651827   61208 cri.go:89] found id: "0b9c7f5dba7428c3622e7175d534d97dac2fad9105f1b945436a1fe76b258b04"
	I0804 00:05:22.651833   61208 cri.go:89] found id: "65cab711a245f05094c24f47ffadbd0dd68c8b36be75a2f743a939bd8449d24a"
	I0804 00:05:22.651838   61208 cri.go:89] found id: "16590c60ec5e6913dbd8207dcbc54d61193b5e2ef38ad31325113cc8668ba2b3"
	I0804 00:05:22.651842   61208 cri.go:89] found id: "b287374d7bcf9c6a5b87064ef5f9822a3f6aa9d05f1e5cabc1762fe54ae8f8fe"
	I0804 00:05:22.651847   61208 cri.go:89] found id: "34479e6ba02499dd134cef1651b8f14a02f3b4c44bc672b0e311d6a2f3719cd0"
	I0804 00:05:22.651850   61208 cri.go:89] found id: "b357f3fadd4d1373e09bc14d8e7e13d0429feb6f5728cad882b6ef2779f66405"
	I0804 00:05:22.651854   61208 cri.go:89] found id: "1d0597b1facbd1f2fea136efebc47f0bfd03ac5cddc5b778598470b187ab2ad4"
	I0804 00:05:22.651857   61208 cri.go:89] found id: "28a3f5d04b73af64cf4e320ae1bca5a2d483e57f0e4cbeb2c8b66d79e8827c22"
	I0804 00:05:22.651865   61208 cri.go:89] found id: "196bab39d51f3ee60ba522009cf63d1700112671055fd411b4f0ccba2ba10b44"
	I0804 00:05:22.651869   61208 cri.go:89] found id: ""
	I0804 00:05:22.651924   61208 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.797192781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729944797173151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9da81591-54c0-492f-ada4-1a82598fd627 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.797693254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e75db41a-999c-4b80-8394-f3aa03561079 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.797751147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e75db41a-999c-4b80-8394-f3aa03561079 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.798064939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c5bd5d102d837156680fbad8a2d48c33bc5c0e6098b7ead00ca72d7d72af30f,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729941206828255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153ed2cdd27fe21e4d4c3b25d2b52fb7f74352cfd8b97c02ba9542ad1ac8657c,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941200674402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f4e1d7f9f1f468c130abe7435135c6163e4fe96b42e759eb7a69db059ef0f36,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941183723187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a26a99db526cd2be4a425f3ff318827d05a8dc02c0148ded56f025244fb23f7,PodSandboxId:91559e267b67fb05d91d726820af47c1689154590a791ef004696fbc30a5ead9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNI
NG,CreatedAt:1722729938536603143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec8fc87acf117245a66bacadf8bcdbc06637a3976692f4e529ef7b1c6d9ae5d,PodSandboxId:6da5fcc022736e32bd63032cc88f61b1de4f6b677443263248710a7833327658,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,Cr
eatedAt:1722729938519962760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5339f3512ff4fc5150813e2e49d7af09cb7d4b7dcc823c0f536070319e30ffcf,PodSandboxId:9c77286509d8828d5a1fd7d423851184374b5f0f8381fcf855fc40d48f43ef1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNIN
G,CreatedAt:1722729938518670696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2878fa12a3da8a0db76eeaec48bb72d21c75ee658faa5d935fc10f98ede84637,PodSandboxId:058f870c2455f549c5db2345518b2d06ce12d2d6e957eb9a8493755e99dbea46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722729933793374463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f41b54cb95766a3a230bb35dff8c84f9283d36589ccb997565f9631690d653,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722
729933783608454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac4849276e16e2831d4e7b672838da4cae7e014bf7bb3018e2f10d95b5a0c96,PodSandboxId:70169dd9c62214e127d0963fd4db1f6558d7b1f825f791e97def7ff861f9a1be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722729930109763564,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc13c3506b087693fdc2de55c2c919d2d00858633348d0d25c8fb775bdedea0,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920990356464,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9c7f5dba7428c3622e7175d534d97dac2fad9105f1b945436a1fe76b258b04,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920954601551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16590c60ec5e6913dbd8207dcbc54d61193b5e2ef38ad31325113cc8668ba2b3,PodSandboxId:9cd93467ec7935c5c7f5bb5b0cc76e5e173cc137a937de92d62e76bc1c8
aa71b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722729918639286873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cab711a245f05094c24f47ffadbd0dd68c8b36be75a2f743a939bd8449d24a,PodSandboxId:3009ba7f71cd36a6b7298acfb4a17f7dac4fecf74f519d591de36b36ada6c7e8,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722729918676398028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b357f3fadd4d1373e09bc14d8e7e13d0429feb6f5728cad882b6ef2779f66405,PodSandboxId:5a4c1535efb9e40a996a2344ef86a494ca97b581b2b2c97b89b8f83c05214aad,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722729918451026061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34479e6ba02499dd134cef1651b8f14a02f3b4c44bc672b0e311d6a2f3719cd0,PodSandboxId:47a6a439d9e219c0c37d919688e14532ee53588024547ceb103ea68b8313ed1f,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722729918458456290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0597b1facbd1f2fea136efebc47f0bfd03ac5cddc5b778598470b187ab2ad4,PodSandboxId:2644388205a5d120e7d9686754fdec30e7c7f3b32bdad399e5714c3f6346a29c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722729918375474023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e75db41a-999c-4b80-8394-f3aa03561079 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.847466416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63b83cfc-e6bf-41a2-894e-9ae07e7e09ec name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.847542928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63b83cfc-e6bf-41a2-894e-9ae07e7e09ec name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.848962048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ded0e27-5391-4bd2-b0d3-a3f8f4e0ba9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.849576942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729944849550901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ded0e27-5391-4bd2-b0d3-a3f8f4e0ba9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.850167042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecb7a32c-3e44-45b5-b06a-69fdcfce5014 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.850267763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecb7a32c-3e44-45b5-b06a-69fdcfce5014 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.850573854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c5bd5d102d837156680fbad8a2d48c33bc5c0e6098b7ead00ca72d7d72af30f,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729941206828255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153ed2cdd27fe21e4d4c3b25d2b52fb7f74352cfd8b97c02ba9542ad1ac8657c,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941200674402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f4e1d7f9f1f468c130abe7435135c6163e4fe96b42e759eb7a69db059ef0f36,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941183723187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a26a99db526cd2be4a425f3ff318827d05a8dc02c0148ded56f025244fb23f7,PodSandboxId:91559e267b67fb05d91d726820af47c1689154590a791ef004696fbc30a5ead9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNI
NG,CreatedAt:1722729938536603143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec8fc87acf117245a66bacadf8bcdbc06637a3976692f4e529ef7b1c6d9ae5d,PodSandboxId:6da5fcc022736e32bd63032cc88f61b1de4f6b677443263248710a7833327658,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,Cr
eatedAt:1722729938519962760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5339f3512ff4fc5150813e2e49d7af09cb7d4b7dcc823c0f536070319e30ffcf,PodSandboxId:9c77286509d8828d5a1fd7d423851184374b5f0f8381fcf855fc40d48f43ef1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNIN
G,CreatedAt:1722729938518670696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2878fa12a3da8a0db76eeaec48bb72d21c75ee658faa5d935fc10f98ede84637,PodSandboxId:058f870c2455f549c5db2345518b2d06ce12d2d6e957eb9a8493755e99dbea46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722729933793374463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f41b54cb95766a3a230bb35dff8c84f9283d36589ccb997565f9631690d653,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722
729933783608454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac4849276e16e2831d4e7b672838da4cae7e014bf7bb3018e2f10d95b5a0c96,PodSandboxId:70169dd9c62214e127d0963fd4db1f6558d7b1f825f791e97def7ff861f9a1be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722729930109763564,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc13c3506b087693fdc2de55c2c919d2d00858633348d0d25c8fb775bdedea0,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920990356464,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9c7f5dba7428c3622e7175d534d97dac2fad9105f1b945436a1fe76b258b04,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920954601551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16590c60ec5e6913dbd8207dcbc54d61193b5e2ef38ad31325113cc8668ba2b3,PodSandboxId:9cd93467ec7935c5c7f5bb5b0cc76e5e173cc137a937de92d62e76bc1c8
aa71b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722729918639286873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cab711a245f05094c24f47ffadbd0dd68c8b36be75a2f743a939bd8449d24a,PodSandboxId:3009ba7f71cd36a6b7298acfb4a17f7dac4fecf74f519d591de36b36ada6c7e8,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722729918676398028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b357f3fadd4d1373e09bc14d8e7e13d0429feb6f5728cad882b6ef2779f66405,PodSandboxId:5a4c1535efb9e40a996a2344ef86a494ca97b581b2b2c97b89b8f83c05214aad,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722729918451026061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34479e6ba02499dd134cef1651b8f14a02f3b4c44bc672b0e311d6a2f3719cd0,PodSandboxId:47a6a439d9e219c0c37d919688e14532ee53588024547ceb103ea68b8313ed1f,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722729918458456290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0597b1facbd1f2fea136efebc47f0bfd03ac5cddc5b778598470b187ab2ad4,PodSandboxId:2644388205a5d120e7d9686754fdec30e7c7f3b32bdad399e5714c3f6346a29c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722729918375474023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecb7a32c-3e44-45b5-b06a-69fdcfce5014 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.894669029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca02df1c-99c5-4c59-9ec4-1f1e49f2de99 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.894944265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca02df1c-99c5-4c59-9ec4-1f1e49f2de99 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.896051628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4687ddb-27d6-4adc-aadf-13df7ef837f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.896484314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729944896460135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4687ddb-27d6-4adc-aadf-13df7ef837f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.897117622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63e41220-eade-4979-b079-6b8521193d81 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.897239289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63e41220-eade-4979-b079-6b8521193d81 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.897598620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c5bd5d102d837156680fbad8a2d48c33bc5c0e6098b7ead00ca72d7d72af30f,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729941206828255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153ed2cdd27fe21e4d4c3b25d2b52fb7f74352cfd8b97c02ba9542ad1ac8657c,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941200674402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f4e1d7f9f1f468c130abe7435135c6163e4fe96b42e759eb7a69db059ef0f36,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941183723187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a26a99db526cd2be4a425f3ff318827d05a8dc02c0148ded56f025244fb23f7,PodSandboxId:91559e267b67fb05d91d726820af47c1689154590a791ef004696fbc30a5ead9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNI
NG,CreatedAt:1722729938536603143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec8fc87acf117245a66bacadf8bcdbc06637a3976692f4e529ef7b1c6d9ae5d,PodSandboxId:6da5fcc022736e32bd63032cc88f61b1de4f6b677443263248710a7833327658,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,Cr
eatedAt:1722729938519962760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5339f3512ff4fc5150813e2e49d7af09cb7d4b7dcc823c0f536070319e30ffcf,PodSandboxId:9c77286509d8828d5a1fd7d423851184374b5f0f8381fcf855fc40d48f43ef1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNIN
G,CreatedAt:1722729938518670696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2878fa12a3da8a0db76eeaec48bb72d21c75ee658faa5d935fc10f98ede84637,PodSandboxId:058f870c2455f549c5db2345518b2d06ce12d2d6e957eb9a8493755e99dbea46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722729933793374463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f41b54cb95766a3a230bb35dff8c84f9283d36589ccb997565f9631690d653,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722
729933783608454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac4849276e16e2831d4e7b672838da4cae7e014bf7bb3018e2f10d95b5a0c96,PodSandboxId:70169dd9c62214e127d0963fd4db1f6558d7b1f825f791e97def7ff861f9a1be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722729930109763564,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc13c3506b087693fdc2de55c2c919d2d00858633348d0d25c8fb775bdedea0,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920990356464,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9c7f5dba7428c3622e7175d534d97dac2fad9105f1b945436a1fe76b258b04,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920954601551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16590c60ec5e6913dbd8207dcbc54d61193b5e2ef38ad31325113cc8668ba2b3,PodSandboxId:9cd93467ec7935c5c7f5bb5b0cc76e5e173cc137a937de92d62e76bc1c8
aa71b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722729918639286873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cab711a245f05094c24f47ffadbd0dd68c8b36be75a2f743a939bd8449d24a,PodSandboxId:3009ba7f71cd36a6b7298acfb4a17f7dac4fecf74f519d591de36b36ada6c7e8,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722729918676398028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b357f3fadd4d1373e09bc14d8e7e13d0429feb6f5728cad882b6ef2779f66405,PodSandboxId:5a4c1535efb9e40a996a2344ef86a494ca97b581b2b2c97b89b8f83c05214aad,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722729918451026061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34479e6ba02499dd134cef1651b8f14a02f3b4c44bc672b0e311d6a2f3719cd0,PodSandboxId:47a6a439d9e219c0c37d919688e14532ee53588024547ceb103ea68b8313ed1f,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722729918458456290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0597b1facbd1f2fea136efebc47f0bfd03ac5cddc5b778598470b187ab2ad4,PodSandboxId:2644388205a5d120e7d9686754fdec30e7c7f3b32bdad399e5714c3f6346a29c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722729918375474023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63e41220-eade-4979-b079-6b8521193d81 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.935176433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcf558d2-4a1e-4515-b130-2c910d91c36b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.935316725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcf558d2-4a1e-4515-b130-2c910d91c36b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.936564511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87add15a-bb2e-4a78-9ac3-6bb3c7ffd479 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.936914457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729944936893586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87add15a-bb2e-4a78-9ac3-6bb3c7ffd479 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.937728141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26db26d6-dd78-447e-8f13-3db29ec82434 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.937783485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26db26d6-dd78-447e-8f13-3db29ec82434 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:44 kubernetes-upgrade-302198 crio[3004]: time="2024-08-04 00:05:44.938601057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c5bd5d102d837156680fbad8a2d48c33bc5c0e6098b7ead00ca72d7d72af30f,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729941206828255,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153ed2cdd27fe21e4d4c3b25d2b52fb7f74352cfd8b97c02ba9542ad1ac8657c,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941200674402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f4e1d7f9f1f468c130abe7435135c6163e4fe96b42e759eb7a69db059ef0f36,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729941183723187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a26a99db526cd2be4a425f3ff318827d05a8dc02c0148ded56f025244fb23f7,PodSandboxId:91559e267b67fb05d91d726820af47c1689154590a791ef004696fbc30a5ead9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNI
NG,CreatedAt:1722729938536603143,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec8fc87acf117245a66bacadf8bcdbc06637a3976692f4e529ef7b1c6d9ae5d,PodSandboxId:6da5fcc022736e32bd63032cc88f61b1de4f6b677443263248710a7833327658,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,Cr
eatedAt:1722729938519962760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5339f3512ff4fc5150813e2e49d7af09cb7d4b7dcc823c0f536070319e30ffcf,PodSandboxId:9c77286509d8828d5a1fd7d423851184374b5f0f8381fcf855fc40d48f43ef1d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNIN
G,CreatedAt:1722729938518670696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2878fa12a3da8a0db76eeaec48bb72d21c75ee658faa5d935fc10f98ede84637,PodSandboxId:058f870c2455f549c5db2345518b2d06ce12d2d6e957eb9a8493755e99dbea46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722729933793374463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f41b54cb95766a3a230bb35dff8c84f9283d36589ccb997565f9631690d653,PodSandboxId:9aaa799f85955eda1a001304223f0a29bcb3a44645af65b6bd0d2fa910df8447,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722
729933783608454,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac02ab0-6e94-4bf6-a823-116bb3092096,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ac4849276e16e2831d4e7b672838da4cae7e014bf7bb3018e2f10d95b5a0c96,PodSandboxId:70169dd9c62214e127d0963fd4db1f6558d7b1f825f791e97def7ff861f9a1be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722729930109763564,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cc13c3506b087693fdc2de55c2c919d2d00858633348d0d25c8fb775bdedea0,PodSandboxId:f169a6143d1827cef15ba7d5ed2073337ae73fe89db8ed0046c7fb783c444589,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920990356464,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kz5wp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d4a7cf-9cc3-4351-beb8-1b5385eca115,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b9c7f5dba7428c3622e7175d534d97dac2fad9105f1b945436a1fe76b258b04,PodSandboxId:3b1b4ed184b39e844a1dfa41522a4f17e03edc8436983937bf12e70408792bff,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729920954601551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xcnkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 843df706-b83a-4b8a-8394-986e2bad5fd5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16590c60ec5e6913dbd8207dcbc54d61193b5e2ef38ad31325113cc8668ba2b3,PodSandboxId:9cd93467ec7935c5c7f5bb5b0cc76e5e173cc137a937de92d62e76bc1c8
aa71b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722729918639286873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bgdmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8221f21f-9e58-42f9-a57b-f77a4fb8bfe5,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65cab711a245f05094c24f47ffadbd0dd68c8b36be75a2f743a939bd8449d24a,PodSandboxId:3009ba7f71cd36a6b7298acfb4a17f7dac4fecf74f519d591de36b36ada6c7e8,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722729918676398028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08673eb4ad68950f579dcdd807d9a62a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b357f3fadd4d1373e09bc14d8e7e13d0429feb6f5728cad882b6ef2779f66405,PodSandboxId:5a4c1535efb9e40a996a2344ef86a494ca97b581b2b2c97b89b8f83c05214aad,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722729918451026061,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a981fef2d4967395ec86acda349b2578,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34479e6ba02499dd134cef1651b8f14a02f3b4c44bc672b0e311d6a2f3719cd0,PodSandboxId:47a6a439d9e219c0c37d919688e14532ee53588024547ceb103ea68b8313ed1f,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722729918458456290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e23ba150da6a65f8aea36aae8b1ca4f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0597b1facbd1f2fea136efebc47f0bfd03ac5cddc5b778598470b187ab2ad4,PodSandboxId:2644388205a5d120e7d9686754fdec30e7c7f3b32bdad399e5714c3f6346a29c,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722729918375474023,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-302198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7580356846c4e75380dddc63538d5408,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26db26d6-dd78-447e-8f13-3db29ec82434 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c5bd5d102d83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   9aaa799f85955       storage-provisioner
	153ed2cdd27fe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   3b1b4ed184b39       coredns-6f6b679f8f-xcnkt
	7f4e1d7f9f1f4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   f169a6143d182       coredns-6f6b679f8f-kz5wp
	9a26a99db526c       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   6 seconds ago       Running             kube-apiserver            2                   91559e267b67f       kube-apiserver-kubernetes-upgrade-302198
	5ec8fc87acf11       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   6 seconds ago       Running             kube-scheduler            2                   6da5fcc022736       kube-scheduler-kubernetes-upgrade-302198
	5339f3512ff4f       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   6 seconds ago       Running             kube-controller-manager   2                   9c77286509d88       kube-controller-manager-kubernetes-upgrade-302198
	2878fa12a3da8       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   11 seconds ago      Running             kube-proxy                2                   058f870c2455f       kube-proxy-bgdmt
	21f41b54cb957       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       2                   9aaa799f85955       storage-provisioner
	7ac4849276e16       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 seconds ago      Running             etcd                      2                   70169dd9c6221       etcd-kubernetes-upgrade-302198
	0cc13c3506b08       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   f169a6143d182       coredns-6f6b679f8f-kz5wp
	0b9c7f5dba742       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   3b1b4ed184b39       coredns-6f6b679f8f-xcnkt
	65cab711a245f       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   26 seconds ago      Exited              kube-scheduler            1                   3009ba7f71cd3       kube-scheduler-kubernetes-upgrade-302198
	16590c60ec5e6       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   26 seconds ago      Exited              kube-proxy                1                   9cd93467ec793       kube-proxy-bgdmt
	34479e6ba0249       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago      Exited              etcd                      1                   47a6a439d9e21       etcd-kubernetes-upgrade-302198
	b357f3fadd4d1       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   26 seconds ago      Exited              kube-controller-manager   1                   5a4c1535efb9e       kube-controller-manager-kubernetes-upgrade-302198
	1d0597b1facbd       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   26 seconds ago      Exited              kube-apiserver            1                   2644388205a5d       kube-apiserver-kubernetes-upgrade-302198
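The table above is the CRI-level view of the restart: every attempt-1 container has exited and its attempt-2 replacement (plus a third storage-provisioner attempt) is running. A listing like this can be reproduced from inside the node with crictl; a minimal sketch, assuming the profile name from this test and that crictl is already pointed at the CRI-O socket (the default inside the minikube guest):

  $ minikube -p kubernetes-upgrade-302198 ssh
  $ sudo crictl ps -a                  # all containers, including the exited attempts
  $ sudo crictl logs 1d0597b1facbd     # e.g. the exited kube-apiserver attempt, addressed by ID prefix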
	
	
	==> coredns [0b9c7f5dba7428c3622e7175d534d97dac2fad9105f1b945436a1fe76b258b04] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [0cc13c3506b087693fdc2de55c2c919d2d00858633348d0d25c8fb775bdedea0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [153ed2cdd27fe21e4d4c3b25d2b52fb7f74352cfd8b97c02ba9542ad1ac8657c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7f4e1d7f9f1f468c130abe7435135c6163e4fe96b42e759eb7a69db059ef0f36] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
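The two exited coredns attempts above could only log connection-refused errors against the service VIP 10.96.0.1:443 because they ran while the kube-apiserver was being restarted; the replacement attempts come up cleanly once the API is reachable again. The same logs can be pulled per pod instead of read out of this report; a sketch, assuming the profile name doubles as the kubeconfig context (minikube's default naming):

  $ kubectl --context kubernetes-upgrade-302198 -n kube-system logs coredns-6f6b679f8f-kz5wp --previous   # exited attempt
  $ kubectl --context kubernetes-upgrade-302198 -n kube-system logs coredns-6f6b679f8f-kz5wp              # current attempt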
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-302198
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-302198
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:04:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-302198
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:05:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:05:40 +0000   Sun, 04 Aug 2024 00:04:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:05:40 +0000   Sun, 04 Aug 2024 00:04:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:05:40 +0000   Sun, 04 Aug 2024 00:04:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:05:40 +0000   Sun, 04 Aug 2024 00:04:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.45
	  Hostname:    kubernetes-upgrade-302198
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd2ed8575755476c95f4d7e7789ed236
	  System UUID:                cd2ed857-5755-476c-95f4-d7e7789ed236
	  Boot ID:                    34afb8e8-36e5-4514-b34c-a1bad3bcdb4c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-kz5wp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     42s
	  kube-system                 coredns-6f6b679f8f-xcnkt                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     42s
	  kube-system                 etcd-kubernetes-upgrade-302198                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         45s
	  kube-system                 kube-apiserver-kubernetes-upgrade-302198             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-302198    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-bgdmt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-scheduler-kubernetes-upgrade-302198             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node kubernetes-upgrade-302198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node kubernetes-upgrade-302198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x7 over 53s)  kubelet          Node kubernetes-upgrade-302198 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           42s                node-controller  Node kubernetes-upgrade-302198 event: Registered Node kubernetes-upgrade-302198 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-302198 event: Registered Node kubernetes-upgrade-302198 in Controller
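The node reports Ready with no taints, and the doubled Starting/RegisteredNode events correspond to the second kube-proxy start and the second node-controller registration after the restart. The same view can be regenerated directly, under the same context-naming assumption as above:

  $ kubectl --context kubernetes-upgrade-302198 describe node kubernetes-upgrade-302198
  $ kubectl --context kubernetes-upgrade-302198 get events -A --sort-by=.lastTimestamp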
	
	
	==> dmesg <==
	[Aug 4 00:04] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.070682] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056143] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.184350] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.153215] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[ +20.218252] kauditd_printk_skb: 102 callbacks suppressed
	[ +19.464592] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +5.946131] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +0.066595] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.343031] systemd-fstab-generator[861]: Ignoring "noauto" option for root device
	[Aug 4 00:05] systemd-fstab-generator[1249]: Ignoring "noauto" option for root device
	[  +0.099230] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.001715] kauditd_printk_skb: 70 callbacks suppressed
	[ +11.750989] systemd-fstab-generator[2190]: Ignoring "noauto" option for root device
	[  +0.088651] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.063029] systemd-fstab-generator[2202]: Ignoring "noauto" option for root device
	[  +0.178422] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.150088] systemd-fstab-generator[2228]: Ignoring "noauto" option for root device
	[  +1.273820] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +2.938095] systemd-fstab-generator[3750]: Ignoring "noauto" option for root device
	[  +8.129947] kauditd_printk_skb: 300 callbacks suppressed
	[  +7.481411] systemd-fstab-generator[4068]: Ignoring "noauto" option for root device
	[  +0.095893] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.308697] systemd-fstab-generator[4523]: Ignoring "noauto" option for root device
	[  +0.105653] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [34479e6ba02499dd134cef1651b8f14a02f3b4c44bc672b0e311d6a2f3719cd0] <==
	{"level":"info","ts":"2024-08-04T00:05:18.993442Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-04T00:05:19.087929Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"702ba6bf9adaab31","local-member-id":"e6ae9fa4dfde6017","commit-index":386}
	{"level":"info","ts":"2024-08-04T00:05:19.088275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-04T00:05:19.090025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 became follower at term 2"}
	{"level":"info","ts":"2024-08-04T00:05:19.090114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e6ae9fa4dfde6017 [peers: [], term: 2, commit: 386, applied: 0, lastindex: 386, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-04T00:05:19.105088Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-04T00:05:19.150149Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":377}
	{"level":"info","ts":"2024-08-04T00:05:19.157985Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-04T00:05:19.169758Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e6ae9fa4dfde6017","timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:05:19.169993Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e6ae9fa4dfde6017"}
	{"level":"info","ts":"2024-08-04T00:05:19.170029Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"e6ae9fa4dfde6017","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-04T00:05:19.170306Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-04T00:05:19.170462Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:05:19.170492Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:05:19.170507Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:05:19.170688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 switched to configuration voters=(16622398805150425111)"}
	{"level":"info","ts":"2024-08-04T00:05:19.170732Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"702ba6bf9adaab31","local-member-id":"e6ae9fa4dfde6017","added-peer-id":"e6ae9fa4dfde6017","added-peer-peer-urls":["https://192.168.61.45:2380"]}
	{"level":"info","ts":"2024-08-04T00:05:19.170798Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"702ba6bf9adaab31","local-member-id":"e6ae9fa4dfde6017","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:05:19.170820Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:05:19.202603Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:05:19.207615Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:05:19.209923Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.45:2380"}
	{"level":"info","ts":"2024-08-04T00:05:19.210056Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.45:2380"}
	{"level":"info","ts":"2024-08-04T00:05:19.211304Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e6ae9fa4dfde6017","initial-advertise-peer-urls":["https://192.168.61.45:2380"],"listen-peer-urls":["https://192.168.61.45:2380"],"advertise-client-urls":["https://192.168.61.45:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.45:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:05:19.211367Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [7ac4849276e16e2831d4e7b672838da4cae7e014bf7bb3018e2f10d95b5a0c96] <==
	{"level":"info","ts":"2024-08-04T00:05:30.232611Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"702ba6bf9adaab31","local-member-id":"e6ae9fa4dfde6017","added-peer-id":"e6ae9fa4dfde6017","added-peer-peer-urls":["https://192.168.61.45:2380"]}
	{"level":"info","ts":"2024-08-04T00:05:30.232689Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"702ba6bf9adaab31","local-member-id":"e6ae9fa4dfde6017","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:05:30.232746Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:05:30.232722Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:05:30.235702Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:05:30.235911Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.45:2380"}
	{"level":"info","ts":"2024-08-04T00:05:30.235940Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.45:2380"}
	{"level":"info","ts":"2024-08-04T00:05:30.236031Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e6ae9fa4dfde6017","initial-advertise-peer-urls":["https://192.168.61.45:2380"],"listen-peer-urls":["https://192.168.61.45:2380"],"advertise-client-urls":["https://192.168.61.45:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.45:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:05:30.236075Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:05:31.622602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T00:05:31.622645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:05:31.622679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 received MsgPreVoteResp from e6ae9fa4dfde6017 at term 2"}
	{"level":"info","ts":"2024-08-04T00:05:31.622693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:05:31.622699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 received MsgVoteResp from e6ae9fa4dfde6017 at term 3"}
	{"level":"info","ts":"2024-08-04T00:05:31.622707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e6ae9fa4dfde6017 became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:05:31.622714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e6ae9fa4dfde6017 elected leader e6ae9fa4dfde6017 at term 3"}
	{"level":"info","ts":"2024-08-04T00:05:31.624464Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e6ae9fa4dfde6017","local-member-attributes":"{Name:kubernetes-upgrade-302198 ClientURLs:[https://192.168.61.45:2379]}","request-path":"/0/members/e6ae9fa4dfde6017/attributes","cluster-id":"702ba6bf9adaab31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:05:31.624469Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:05:31.624623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:05:31.624965Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:05:31.624978Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:05:31.625647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:05:31.625683Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:05:31.626470Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:05:31.626482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.45:2379"}
	
	
	==> kernel <==
	 00:05:45 up 1 min,  0 users,  load average: 0.33, 0.16, 0.06
	Linux kubernetes-upgrade-302198 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d0597b1facbd1f2fea136efebc47f0bfd03ac5cddc5b778598470b187ab2ad4] <==
	
	
	==> kube-apiserver [9a26a99db526cd2be4a425f3ff318827d05a8dc02c0148ded56f025244fb23f7] <==
	I0804 00:05:40.848117       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:05:40.848730       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0804 00:05:40.848799       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0804 00:05:40.857318       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:05:40.857989       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0804 00:05:40.858092       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:05:40.858848       1 aggregator.go:171] initial CRD sync complete...
	I0804 00:05:40.858904       1 autoregister_controller.go:144] Starting autoregister controller
	I0804 00:05:40.858910       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:05:40.858916       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:05:40.865874       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:05:40.866054       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:05:40.866107       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0804 00:05:40.866183       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0804 00:05:40.870175       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:05:40.870259       1 policy_source.go:224] refreshing policies
	I0804 00:05:40.964090       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:05:41.427489       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:05:41.751346       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 00:05:42.686402       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:05:42.702661       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:05:42.738172       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:05:42.867567       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:05:42.876871       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 00:05:44.380722       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
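Once the second kube-apiserver attempt has synced its caches and re-registered the quota evaluators above, the control plane is serving again. A quick readiness probe against the live endpoint, assuming the same context:

  $ kubectl --context kubernetes-upgrade-302198 get --raw='/readyz?verbose'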
	
	
	==> kube-controller-manager [5339f3512ff4fc5150813e2e49d7af09cb7d4b7dcc823c0f536070319e30ffcf] <==
	I0804 00:05:44.182621       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0804 00:05:44.182652       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0804 00:05:44.182783       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-302198"
	I0804 00:05:44.185404       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0804 00:05:44.187257       1 shared_informer.go:320] Caches are synced for expand
	I0804 00:05:44.196386       1 shared_informer.go:320] Caches are synced for disruption
	I0804 00:05:44.205085       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 00:05:44.211451       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 00:05:44.212677       1 shared_informer.go:320] Caches are synced for namespace
	I0804 00:05:44.218096       1 shared_informer.go:320] Caches are synced for stateful set
	I0804 00:05:44.221423       1 shared_informer.go:320] Caches are synced for cronjob
	I0804 00:05:44.223886       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 00:05:44.226562       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0804 00:05:44.226687       1 shared_informer.go:320] Caches are synced for crt configmap
	I0804 00:05:44.227834       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0804 00:05:44.238039       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:05:44.242384       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0804 00:05:44.278547       1 shared_informer.go:320] Caches are synced for PV protection
	I0804 00:05:44.383037       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:05:44.393743       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:05:44.546554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="322.611002ms"
	I0804 00:05:44.546767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="111.861µs"
	I0804 00:05:44.827256       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:05:44.827295       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:05:44.830909       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [b357f3fadd4d1373e09bc14d8e7e13d0429feb6f5728cad882b6ef2779f66405] <==
	
	
	==> kube-proxy [16590c60ec5e6913dbd8207dcbc54d61193b5e2ef38ad31325113cc8668ba2b3] <==
	
	
	==> kube-proxy [2878fa12a3da8a0db76eeaec48bb72d21c75ee658faa5d935fc10f98ede84637] <==
	E0804 00:05:33.977112       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0804 00:05:33.978844       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-302198\": dial tcp 192.168.61.45:8443: connect: connection refused"
	E0804 00:05:35.037766       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-302198\": dial tcp 192.168.61.45:8443: connect: connection refused"
	E0804 00:05:37.217673       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-302198\": dial tcp 192.168.61.45:8443: connect: connection refused"
	I0804 00:05:41.789916       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.45"]
	E0804 00:05:41.789984       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0804 00:05:41.839445       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0804 00:05:41.839535       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:05:41.839574       1 server_linux.go:169] "Using iptables Proxier"
	I0804 00:05:41.846306       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0804 00:05:41.846589       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0804 00:05:41.846629       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:05:41.853457       1 config.go:197] "Starting service config controller"
	I0804 00:05:41.853558       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:05:41.854304       1 config.go:104] "Starting endpoint slice config controller"
	I0804 00:05:41.854397       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:05:41.855442       1 config.go:326] "Starting node config controller"
	I0804 00:05:41.855521       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:05:41.954436       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:05:41.954544       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:05:41.955631       1 shared_informer.go:320] Caches are synced for node config
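The restarted kube-proxy first fails its IPv6 nftables cleanup ("Operation not supported", apparently because the guest kernel lacks that support), retries against the apiserver at 192.168.61.45:8443 until it returns, then settles into single-stack IPv4 iptables mode. The effective configuration can be inspected from the kubeadm-managed ConfigMap; a sketch, assuming that ConfigMap exists as it normally does in minikube clusters:

  $ kubectl --context kubernetes-upgrade-302198 -n kube-system get configmap kube-proxy -o yaml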
	
	
	==> kube-scheduler [5ec8fc87acf117245a66bacadf8bcdbc06637a3976692f4e529ef7b1c6d9ae5d] <==
	I0804 00:05:39.256853       1 serving.go:386] Generated self-signed cert in-memory
	W0804 00:05:40.822955       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:05:40.823049       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:05:40.823066       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:05:40.823071       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:05:40.875262       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0804 00:05:40.875296       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:05:40.881392       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:05:40.881511       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:05:40.881556       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:05:40.881572       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0804 00:05:40.982517       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [65cab711a245f05094c24f47ffadbd0dd68c8b36be75a2f743a939bd8449d24a] <==
	
	
	==> kubelet <==
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: E0804 00:05:38.483854    4075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-302198?timeout=10s\": dial tcp 192.168.61.45:8443: connect: connection refused" interval="800ms"
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:38.493007    4075 scope.go:117] "RemoveContainer" containerID="65cab711a245f05094c24f47ffadbd0dd68c8b36be75a2f743a939bd8449d24a"
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:38.495401    4075 scope.go:117] "RemoveContainer" containerID="1d0597b1facbd1f2fea136efebc47f0bfd03ac5cddc5b778598470b187ab2ad4"
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:38.496979    4075 scope.go:117] "RemoveContainer" containerID="b357f3fadd4d1373e09bc14d8e7e13d0429feb6f5728cad882b6ef2779f66405"
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: E0804 00:05:38.509266    4075 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.45:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-302198.17e85dc045a0ee8b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-302198,UID:kubernetes-upgrade-302198,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-302198,},FirstTimestamp:2024-08-04 00:05:37.852493451 +0000 UTC m=+0.104899782,LastTimestamp:2024-08-04 00:05:37.852493451 +0000 UTC m=+0.104899782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-302198,}"
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:38.758931    4075 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-302198"
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: E0804 00:05:38.759904    4075 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.45:8443: connect: connection refused" node="kubernetes-upgrade-302198"
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: W0804 00:05:38.924988    4075 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.61.45:8443: connect: connection refused
	Aug 04 00:05:38 kubernetes-upgrade-302198 kubelet[4075]: E0804 00:05:38.925056    4075 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.61.45:8443: connect: connection refused" logger="UnhandledError"
	Aug 04 00:05:39 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:39.561863    4075 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-302198"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.859437    4075 apiserver.go:52] "Watching apiserver"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.871189    4075 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.940149    4075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bac02ab0-6e94-4bf6-a823-116bb3092096-tmp\") pod \"storage-provisioner\" (UID: \"bac02ab0-6e94-4bf6-a823-116bb3092096\") " pod="kube-system/storage-provisioner"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.940380    4075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8221f21f-9e58-42f9-a57b-f77a4fb8bfe5-lib-modules\") pod \"kube-proxy-bgdmt\" (UID: \"8221f21f-9e58-42f9-a57b-f77a4fb8bfe5\") " pod="kube-system/kube-proxy-bgdmt"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.940563    4075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8221f21f-9e58-42f9-a57b-f77a4fb8bfe5-xtables-lock\") pod \"kube-proxy-bgdmt\" (UID: \"8221f21f-9e58-42f9-a57b-f77a4fb8bfe5\") " pod="kube-system/kube-proxy-bgdmt"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.987177    4075 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-302198"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.987448    4075 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-302198"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.987528    4075 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 00:05:40 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:40.989018    4075 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 00:05:41 kubernetes-upgrade-302198 kubelet[4075]: E0804 00:05:41.108523    4075 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-302198\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-302198"
	Aug 04 00:05:41 kubernetes-upgrade-302198 kubelet[4075]: E0804 00:05:41.109701    4075 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-302198\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-302198"
	Aug 04 00:05:41 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:41.167478    4075 scope.go:117] "RemoveContainer" containerID="0cc13c3506b087693fdc2de55c2c919d2d00858633348d0d25c8fb775bdedea0"
	Aug 04 00:05:41 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:41.168019    4075 scope.go:117] "RemoveContainer" containerID="0b9c7f5dba7428c3622e7175d534d97dac2fad9105f1b945436a1fe76b258b04"
	Aug 04 00:05:41 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:41.169050    4075 scope.go:117] "RemoveContainer" containerID="21f41b54cb95766a3a230bb35dff8c84f9283d36589ccb997565f9631690d653"
	Aug 04 00:05:43 kubernetes-upgrade-302198 kubelet[4075]: I0804 00:05:43.370539    4075 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [21f41b54cb95766a3a230bb35dff8c84f9283d36589ccb997565f9631690d653] <==
	I0804 00:05:33.886062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0804 00:05:33.887502       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [5c5bd5d102d837156680fbad8a2d48c33bc5c0e6098b7ead00ca72d7d72af30f] <==
	I0804 00:05:41.390953       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:05:41.413656       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:05:41.413784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:05:41.434551       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:05:41.434724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-302198_be931d71-f68e-4abd-925d-0f8c89711182!
	I0804 00:05:41.436522       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f25a5aff-ab79-492b-a6d1-d49e5749dfc0", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-302198_be931d71-f68e-4abd-925d-0f8c89711182 became leader
	I0804 00:05:41.535687       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-302198_be931d71-f68e-4abd-925d-0f8c89711182!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:05:44.406405   61893 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-9607/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
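The "bufio.Scanner: token too long" error in the stderr capture above is standard Go library behavior: bufio.Scanner refuses to return a line longer than its buffer limit (64 KiB by default), and lastStart.txt evidently contains such a line. A minimal sketch of reading a log file with a larger per-line cap, assuming a hypothetical file name and a 1 MiB limit (this is not minikube's actual logs code):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical stand-in for .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default max token size is 64 KiB; raise it to 1 MiB so very long
	// lines no longer fail with "bufio.Scanner: token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err) // still fires if a single line exceeds 1 MiB
	}
}

Note that Scanner.Buffer must be called before the first Scan; an alternative with no fixed line limit is bufio.Reader.ReadString('\n').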
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-302198 -n kubernetes-upgrade-302198
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-302198 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-302198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-302198
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-302198: (1.134644289s)
--- FAIL: TestKubernetesUpgrade (444.94s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (63.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-908631 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0804 00:00:41.058559   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-908631 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.748094277s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-908631] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-908631" primary control-plane node in "pause-908631" cluster
	* Updating the running kvm2 "pause-908631" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-908631" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
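For context on the assertion at pause_test.go:100 above: the failure means the expected substring never appeared anywhere in the captured second-start output (the verbose trace follows in the stderr capture below). A minimal sketch of that kind of substring check, offered as an illustration rather than the actual pause_test.go implementation:

package main

import (
	"fmt"
	"strings"
)

// reusedClusterMessage is the line the test above reports as missing from the
// second start's output.
const reusedClusterMessage = "The running cluster does not require reconfiguration"

// secondStartReusedCluster is a hypothetical helper illustrating the check:
// it simply looks for the message in the combined start output.
func secondStartReusedCluster(output string) bool {
	return strings.Contains(output, reusedClusterMessage)
}

func main() {
	// With the stdout shown above, the check evaluates to false, which is
	// exactly why the test failed.
	fmt.Println(secondStartReusedCluster("* Enabled addons: \n* Done! kubectl is now configured"))
}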
** stderr ** 
	I0804 00:00:23.539044   54831 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:00:23.539169   54831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:00:23.539178   54831 out.go:304] Setting ErrFile to fd 2...
	I0804 00:00:23.539187   54831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:00:23.539409   54831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:00:23.539940   54831 out.go:298] Setting JSON to false
	I0804 00:00:23.540898   54831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6167,"bootTime":1722723456,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:00:23.540952   54831 start.go:139] virtualization: kvm guest
	I0804 00:00:23.543292   54831 out.go:177] * [pause-908631] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:00:23.544638   54831 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:00:23.544655   54831 notify.go:220] Checking for updates...
	I0804 00:00:23.547392   54831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:00:23.548768   54831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:00:23.550090   54831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:00:23.551383   54831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:00:23.552715   54831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:00:23.554402   54831 config.go:182] Loaded profile config "pause-908631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:00:23.554857   54831 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:00:23.554927   54831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:00:23.569604   54831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
	I0804 00:00:23.570068   54831 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:00:23.570580   54831 main.go:141] libmachine: Using API Version  1
	I0804 00:00:23.570605   54831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:00:23.570948   54831 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:00:23.571178   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:23.571451   54831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:00:23.571727   54831 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:00:23.571763   54831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:00:23.586208   54831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0804 00:00:23.586642   54831 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:00:23.587125   54831 main.go:141] libmachine: Using API Version  1
	I0804 00:00:23.587154   54831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:00:23.587495   54831 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:00:23.587661   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:23.623531   54831 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:00:23.624907   54831 start.go:297] selected driver: kvm2
	I0804 00:00:23.624926   54831 start.go:901] validating driver "kvm2" against &{Name:pause-908631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-908631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:00:23.625125   54831 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:00:23.625624   54831 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:00:23.625722   54831 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:00:23.640560   54831 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:00:23.641419   54831 cni.go:84] Creating CNI manager for ""
	I0804 00:00:23.641439   54831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:00:23.641503   54831 start.go:340] cluster config:
	{Name:pause-908631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-908631 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:00:23.641632   54831 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:00:23.644437   54831 out.go:177] * Starting "pause-908631" primary control-plane node in "pause-908631" cluster
	I0804 00:00:23.645656   54831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:00:23.645692   54831 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:00:23.645701   54831 cache.go:56] Caching tarball of preloaded images
	I0804 00:00:23.645814   54831 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:00:23.645829   54831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:00:23.645966   54831 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/config.json ...
	I0804 00:00:23.646196   54831 start.go:360] acquireMachinesLock for pause-908631: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:00:27.925890   54831 start.go:364] duration metric: took 4.279660545s to acquireMachinesLock for "pause-908631"
	I0804 00:00:27.925965   54831 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:00:27.925974   54831 fix.go:54] fixHost starting: 
	I0804 00:00:27.926677   54831 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:00:27.926745   54831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:00:27.947302   54831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40219
	I0804 00:00:27.947802   54831 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:00:27.948468   54831 main.go:141] libmachine: Using API Version  1
	I0804 00:00:27.948492   54831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:00:27.948876   54831 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:00:27.949075   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:27.949200   54831 main.go:141] libmachine: (pause-908631) Calling .GetState
	I0804 00:00:27.951230   54831 fix.go:112] recreateIfNeeded on pause-908631: state=Running err=<nil>
	W0804 00:00:27.951264   54831 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:00:27.953410   54831 out.go:177] * Updating the running kvm2 "pause-908631" VM ...
	I0804 00:00:27.954671   54831 machine.go:94] provisionDockerMachine start ...
	I0804 00:00:27.954710   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:27.954917   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:27.957547   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:27.957998   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:27.958026   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:27.958165   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:27.958340   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:27.958525   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:27.958696   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:27.958876   54831 main.go:141] libmachine: Using SSH client type: native
	I0804 00:00:27.959109   54831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0804 00:00:27.959122   54831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:00:28.077749   54831 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-908631
	
	I0804 00:00:28.077783   54831 main.go:141] libmachine: (pause-908631) Calling .GetMachineName
	I0804 00:00:28.078041   54831 buildroot.go:166] provisioning hostname "pause-908631"
	I0804 00:00:28.078072   54831 main.go:141] libmachine: (pause-908631) Calling .GetMachineName
	I0804 00:00:28.078260   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:28.081173   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.081715   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:28.081742   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.081936   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:28.082100   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:28.082264   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:28.082404   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:28.082592   54831 main.go:141] libmachine: Using SSH client type: native
	I0804 00:00:28.082798   54831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0804 00:00:28.082819   54831 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-908631 && echo "pause-908631" | sudo tee /etc/hostname
	I0804 00:00:28.216429   54831 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-908631
	
	I0804 00:00:28.216460   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:28.219430   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.219744   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:28.219763   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.220002   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:28.220212   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:28.220368   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:28.220522   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:28.220696   54831 main.go:141] libmachine: Using SSH client type: native
	I0804 00:00:28.220912   54831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0804 00:00:28.220931   54831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-908631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-908631/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-908631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:00:28.338557   54831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:00:28.338590   54831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:00:28.338626   54831 buildroot.go:174] setting up certificates
	I0804 00:00:28.338635   54831 provision.go:84] configureAuth start
	I0804 00:00:28.338653   54831 main.go:141] libmachine: (pause-908631) Calling .GetMachineName
	I0804 00:00:28.338994   54831 main.go:141] libmachine: (pause-908631) Calling .GetIP
	I0804 00:00:28.341868   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.342306   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:28.342334   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.342497   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:28.345103   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.345471   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:28.345510   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.345654   54831 provision.go:143] copyHostCerts
	I0804 00:00:28.345718   54831 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:00:28.345731   54831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:00:28.345798   54831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:00:28.345940   54831 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:00:28.345952   54831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:00:28.345985   54831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:00:28.346071   54831 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:00:28.346081   54831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:00:28.346108   54831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:00:28.346168   54831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.pause-908631 san=[127.0.0.1 192.168.50.32 localhost minikube pause-908631]
	I0804 00:00:28.536571   54831 provision.go:177] copyRemoteCerts
	I0804 00:00:28.536626   54831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:00:28.536659   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:28.539571   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.539906   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:28.539937   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.540144   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:28.540343   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:28.540506   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:28.540663   54831 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/pause-908631/id_rsa Username:docker}
	I0804 00:00:28.634870   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:00:28.662964   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0804 00:00:28.693555   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:00:28.722498   54831 provision.go:87] duration metric: took 383.847364ms to configureAuth
	I0804 00:00:28.722530   54831 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:00:28.722800   54831 config.go:182] Loaded profile config "pause-908631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:00:28.722882   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:28.725915   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.726339   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:28.726384   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:28.726554   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:28.726734   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:28.726901   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:28.727030   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:28.727253   54831 main.go:141] libmachine: Using SSH client type: native
	I0804 00:00:28.727498   54831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0804 00:00:28.727521   54831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:00:34.453296   54831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:00:34.453345   54831 machine.go:97] duration metric: took 6.498636767s to provisionDockerMachine
	I0804 00:00:34.453368   54831 start.go:293] postStartSetup for "pause-908631" (driver="kvm2")
	I0804 00:00:34.453382   54831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:00:34.453411   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:34.453768   54831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:00:34.453803   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:34.456950   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.457418   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:34.457475   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.457590   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:34.457747   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:34.457895   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:34.458042   54831 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/pause-908631/id_rsa Username:docker}
	I0804 00:00:34.554894   54831 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:00:34.559899   54831 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:00:34.559928   54831 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:00:34.560001   54831 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:00:34.560132   54831 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:00:34.560273   54831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:00:34.573233   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:00:34.608745   54831 start.go:296] duration metric: took 155.360085ms for postStartSetup
	I0804 00:00:34.608798   54831 fix.go:56] duration metric: took 6.682824472s for fixHost
	I0804 00:00:34.608827   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:34.611886   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.612417   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:34.612441   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.612624   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:34.612806   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:34.612922   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:34.613100   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:34.613258   54831 main.go:141] libmachine: Using SSH client type: native
	I0804 00:00:34.613532   54831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I0804 00:00:34.613544   54831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:00:34.739101   54831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729634.736852501
	
	I0804 00:00:34.739140   54831 fix.go:216] guest clock: 1722729634.736852501
	I0804 00:00:34.739149   54831 fix.go:229] Guest: 2024-08-04 00:00:34.736852501 +0000 UTC Remote: 2024-08-04 00:00:34.608806395 +0000 UTC m=+11.105237373 (delta=128.046106ms)
	I0804 00:00:34.739175   54831 fix.go:200] guest clock delta is within tolerance: 128.046106ms
	I0804 00:00:34.739181   54831 start.go:83] releasing machines lock for "pause-908631", held for 6.813239578s
	I0804 00:00:34.739203   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:34.739496   54831 main.go:141] libmachine: (pause-908631) Calling .GetIP
	I0804 00:00:34.742487   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.742982   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:34.743013   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.743219   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:34.743832   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:34.744044   54831 main.go:141] libmachine: (pause-908631) Calling .DriverName
	I0804 00:00:34.744129   54831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:00:34.744167   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:34.744307   54831 ssh_runner.go:195] Run: cat /version.json
	I0804 00:00:34.744348   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHHostname
	I0804 00:00:34.747912   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.748416   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:34.748450   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.748712   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.748781   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:34.748969   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:34.749077   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:34.749118   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:34.749133   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:34.749311   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHPort
	I0804 00:00:34.749331   54831 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/pause-908631/id_rsa Username:docker}
	I0804 00:00:34.749495   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHKeyPath
	I0804 00:00:34.749639   54831 main.go:141] libmachine: (pause-908631) Calling .GetSSHUsername
	I0804 00:00:34.749830   54831 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/pause-908631/id_rsa Username:docker}
	I0804 00:00:34.862529   54831 ssh_runner.go:195] Run: systemctl --version
	I0804 00:00:34.873669   54831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:00:35.045716   54831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:00:35.052063   54831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:00:35.052152   54831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:00:35.062181   54831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 00:00:35.062208   54831 start.go:495] detecting cgroup driver to use...
	I0804 00:00:35.062280   54831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:00:35.084669   54831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:00:35.103133   54831 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:00:35.103308   54831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:00:35.125461   54831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:00:35.147335   54831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:00:35.355299   54831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:00:35.513033   54831 docker.go:233] disabling docker service ...
	I0804 00:00:35.513203   54831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:00:35.532386   54831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:00:35.550580   54831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:00:35.715863   54831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:00:35.882341   54831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:00:35.902240   54831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:00:35.931005   54831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:00:35.931073   54831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:00:35.946359   54831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:00:35.946441   54831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:00:35.960655   54831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:00:35.975423   54831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:00:35.992303   54831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:00:36.009088   54831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:00:36.025568   54831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:00:36.044550   54831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:00:36.060561   54831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:00:36.072478   54831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:00:36.084279   54831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:00:36.254150   54831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:00:36.559473   54831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:00:36.559548   54831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:00:36.566312   54831 start.go:563] Will wait 60s for crictl version
	I0804 00:00:36.566380   54831 ssh_runner.go:195] Run: which crictl
	I0804 00:00:36.570941   54831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:00:36.611294   54831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:00:36.611402   54831 ssh_runner.go:195] Run: crio --version
	I0804 00:00:36.647494   54831 ssh_runner.go:195] Run: crio --version
	I0804 00:00:36.686984   54831 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:00:36.688302   54831 main.go:141] libmachine: (pause-908631) Calling .GetIP
	I0804 00:00:36.691563   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:36.692001   54831 main.go:141] libmachine: (pause-908631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:54:be", ip: ""} in network mk-pause-908631: {Iface:virbr2 ExpiryTime:2024-08-04 00:58:59 +0000 UTC Type:0 Mac:52:54:00:22:54:be Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:pause-908631 Clientid:01:52:54:00:22:54:be}
	I0804 00:00:36.692032   54831 main.go:141] libmachine: (pause-908631) DBG | domain pause-908631 has defined IP address 192.168.50.32 and MAC address 52:54:00:22:54:be in network mk-pause-908631
	I0804 00:00:36.692249   54831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:00:36.698154   54831 kubeadm.go:883] updating cluster {Name:pause-908631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-908631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:00:36.698315   54831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:00:36.698377   54831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:00:36.754056   54831 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:00:36.754088   54831 crio.go:433] Images already preloaded, skipping extraction
	I0804 00:00:36.754153   54831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:00:36.806553   54831 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:00:36.806580   54831 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:00:36.806590   54831 kubeadm.go:934] updating node { 192.168.50.32 8443 v1.30.3 crio true true} ...
	I0804 00:00:36.806759   54831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-908631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-908631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
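The kubelet command line above is written to the node as a systemd drop-in a few lines below (10-kubeadm.conf, plus the kubelet.service unit itself). A small sketch for checking what actually landed on the node, assuming the pause-908631 profile name:

    minikube ssh -p pause-908631 "sudo systemctl cat kubelet"
    minikube ssh -p pause-908631 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"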
	I0804 00:00:36.806849   54831 ssh_runner.go:195] Run: crio config
	I0804 00:00:36.869757   54831 cni.go:84] Creating CNI manager for ""
	I0804 00:00:36.869780   54831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:00:36.869792   54831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:00:36.869819   54831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-908631 NodeName:pause-908631 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:00:36.870029   54831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-908631"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:00:36.870100   54831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:00:36.881334   54831 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:00:36.881441   54831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:00:36.891213   54831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0804 00:00:36.911358   54831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:00:36.932238   54831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
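The rendered kubeadm/kubelet/kube-proxy configuration above is uploaded as kubeadm.yaml.new (2153 bytes here); minikube later compares it against the running cluster to decide whether anything needs reconfiguring. A rough way to see what was written, and how it differs from the previous run, assuming the pause-908631 profile and that an older kubeadm.yaml is still present on the node:

    minikube ssh -p pause-908631 "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"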
	I0804 00:00:36.951552   54831 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I0804 00:00:36.955882   54831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:00:37.100333   54831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:00:37.115680   54831 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631 for IP: 192.168.50.32
	I0804 00:00:37.115706   54831 certs.go:194] generating shared ca certs ...
	I0804 00:00:37.115725   54831 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:00:37.115905   54831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:00:37.115958   54831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:00:37.115970   54831 certs.go:256] generating profile certs ...
	I0804 00:00:37.116073   54831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/client.key
	I0804 00:00:37.116170   54831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/apiserver.key.8da20f36
	I0804 00:00:37.116246   54831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/proxy-client.key
	I0804 00:00:37.116355   54831 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:00:37.116382   54831 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:00:37.116393   54831 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:00:37.116416   54831 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:00:37.116437   54831 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:00:37.116458   54831 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:00:37.116493   54831 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:00:37.117086   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:00:37.196498   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:00:37.314605   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:00:37.468603   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:00:37.576622   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0804 00:00:37.713397   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 00:00:37.894818   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:00:38.119492   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/pause-908631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:00:38.170943   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:00:38.318931   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:00:38.436436   54831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:00:38.497079   54831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:00:38.532097   54831 ssh_runner.go:195] Run: openssl version
	I0804 00:00:38.547516   54831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:00:38.574901   54831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:00:38.585473   54831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:00:38.585552   54831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:00:38.611795   54831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:00:38.624616   54831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:00:38.639240   54831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:00:38.645476   54831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:00:38.645542   54831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:00:38.653364   54831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:00:38.667178   54831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:00:38.683745   54831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:00:38.690473   54831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:00:38.690540   54831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:00:38.699245   54831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
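The blocks above install the minikube CA and the test certificates into the system trust store: each PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash and linked into /etc/ssl/certs under a <hash>.0 name, which is the layout OpenSSL uses for CA lookup by directory. The same check by hand, with paths taken from the log above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, per the symlink name above
    ls -l /etc/ssl/certs/b5213941.0                                           # points back at minikubeCA.pem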
	I0804 00:00:38.711937   54831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:00:38.717259   54831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:00:38.724777   54831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:00:38.731676   54831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:00:38.738620   54831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:00:38.751751   54831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:00:38.762089   54831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
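Each control-plane certificate is then run through openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that result is what lets the restart path decide whether certificates need regenerating. For one of the files above:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h (or unreadable), would need regenerating"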
	I0804 00:00:38.769924   54831 kubeadm.go:392] StartCluster: {Name:pause-908631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-908631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:00:38.770158   54831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:00:38.770252   54831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:00:38.899363   54831 cri.go:89] found id: "e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415"
	I0804 00:00:38.899398   54831 cri.go:89] found id: "58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70"
	I0804 00:00:38.899404   54831 cri.go:89] found id: "a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9"
	I0804 00:00:38.899409   54831 cri.go:89] found id: "81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47"
	I0804 00:00:38.899414   54831 cri.go:89] found id: "a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11"
	I0804 00:00:38.899420   54831 cri.go:89] found id: "46043b103c4dce8874798c01e51e7db90467ccd92135efd1402c2b20e2190ea3"
	I0804 00:00:38.899425   54831 cri.go:89] found id: "d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c"
	I0804 00:00:38.899429   54831 cri.go:89] found id: "2eea495072ad58cdf57c59615162c129ddca1a59f7733272e7941946f0a8ad25"
	I0804 00:00:38.899434   54831 cri.go:89] found id: "f2fa98cbf13e47beda24a3133ccd3eae66308980299ea37529751996d21f9ff7"
	I0804 00:00:38.899444   54831 cri.go:89] found id: "5ab8882892102d0416f8f431244ed369f4cf8ce5872c54e5fb06ff2f380bed66"
	I0804 00:00:38.899450   54831 cri.go:89] found id: "d7d3d872fb4c43aee990e5d32f441925282c2a6f5c70ff0e51afc37cea3eebc4"
	I0804 00:00:38.899476   54831 cri.go:89] found id: ""
	I0804 00:00:38.899544   54831 ssh_runner.go:195] Run: sudo runc list -f json
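The IDs listed above come from crictl ps -a --quiet filtered on the kube-system namespace label, so only container IDs are printed; the capture then cuts off during runc list. A sketch of the same listing in a readable form on the node (not part of the test itself):

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system   # default table output with names and states
    sudo runc list                                                      # the low-level runtime's view of the same containers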

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-908631 -n pause-908631
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-908631 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-908631 logs -n 25: (1.423917147s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p test-preload-278819         | test-preload-278819       | jenkins | v1.33.1 | 03 Aug 24 23:55 UTC | 03 Aug 24 23:56 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| image   | test-preload-278819 image list | test-preload-278819       | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC | 03 Aug 24 23:56 UTC |
	| delete  | -p test-preload-278819         | test-preload-278819       | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC | 03 Aug 24 23:56 UTC |
	| start   | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC | 03 Aug 24 23:57 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC | 03 Aug 24 23:57 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC | 03 Aug 24 23:57 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 03 Aug 24 23:58 UTC |
	| start   | -p offline-crio-855826         | offline-crio-855826       | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 03 Aug 24 23:59 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-908631 --memory=2048  | pause-908631              | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 04 Aug 24 00:00 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-860380      | minikube                  | jenkins | v1.26.0 | 03 Aug 24 23:58 UTC | 04 Aug 24 00:00 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302198   | kubernetes-upgrade-302198 | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-855826         | offline-crio-855826       | jenkins | v1.33.1 | 03 Aug 24 23:59 UTC | 03 Aug 24 23:59 UTC |
	| start   | -p stopped-upgrade-082329      | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:00 UTC | 04 Aug 24 00:01 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-908631                | pause-908631              | jenkins | v1.33.1 | 04 Aug 24 00:00 UTC | 04 Aug 24 00:01 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-860380      | running-upgrade-860380    | jenkins | v1.33.1 | 04 Aug 24 00:00 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-082329 stop    | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:01 UTC | 04 Aug 24 00:01 UTC |
	| start   | -p stopped-upgrade-082329      | stopped-upgrade-082329    | jenkins | v1.33.1 | 04 Aug 24 00:01 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:01:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
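The header above describes the klog line format: the leading character is the severity (I, W, E or F), followed by month/day, the timestamp, the thread/process id, and the source file and line that emitted the message. Several minikube processes write into this capture (55372 restarting stopped-upgrade-082329, 55045 working on running-upgrade-860380, 54831 on the pause-908631 second start), so their lines interleave. A sketch for following a single process, assuming the log has been saved to a file (the file name here is made up):

    grep ' 54831 ' pause-second-start.log    # only the pause-908631 second start
    grep ' 55372 ' pause-second-start.log    # only the stopped-upgrade-082329 restart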
	I0804 00:01:14.554393   55372 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:01:14.554515   55372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:01:14.554525   55372 out.go:304] Setting ErrFile to fd 2...
	I0804 00:01:14.554530   55372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:01:14.554725   55372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:01:14.555269   55372 out.go:298] Setting JSON to false
	I0804 00:01:14.556201   55372 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6219,"bootTime":1722723456,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:01:14.556262   55372 start.go:139] virtualization: kvm guest
	I0804 00:01:14.558381   55372 out.go:177] * [stopped-upgrade-082329] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:01:14.559766   55372 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:01:14.559808   55372 notify.go:220] Checking for updates...
	I0804 00:01:14.562255   55372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:01:14.564013   55372 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:01:14.565290   55372 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:01:14.566954   55372 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:01:14.568329   55372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:01:14.569980   55372 config.go:182] Loaded profile config "stopped-upgrade-082329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0804 00:01:14.570427   55372 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:01:14.570483   55372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:01:14.588554   55372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0804 00:01:14.589107   55372 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:01:14.589805   55372 main.go:141] libmachine: Using API Version  1
	I0804 00:01:14.589835   55372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:01:14.590277   55372 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:01:14.590431   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	I0804 00:01:14.592052   55372 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0804 00:01:14.593395   55372 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:01:14.593702   55372 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:01:14.593743   55372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:01:14.609348   55372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0804 00:01:14.609870   55372 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:01:14.610404   55372 main.go:141] libmachine: Using API Version  1
	I0804 00:01:14.610424   55372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:01:14.610825   55372 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:01:14.611023   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	I0804 00:01:14.654228   55372 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:01:14.655628   55372 start.go:297] selected driver: kvm2
	I0804 00:01:14.655649   55372 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-082329 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-082
329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.97 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0804 00:01:14.655779   55372 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:01:14.656802   55372 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:01:14.656899   55372 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:01:14.675100   55372 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:01:14.675572   55372 cni.go:84] Creating CNI manager for ""
	I0804 00:01:14.675600   55372 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:01:14.675693   55372 start.go:340] cluster config:
	{Name:stopped-upgrade-082329 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-082329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.97 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0804 00:01:14.675864   55372 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:01:14.678050   55372 out.go:177] * Starting "stopped-upgrade-082329" primary control-plane node in "stopped-upgrade-082329" cluster
	I0804 00:01:14.679539   55372 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0804 00:01:14.679594   55372 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0804 00:01:14.679618   55372 cache.go:56] Caching tarball of preloaded images
	I0804 00:01:14.679730   55372 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:01:14.679744   55372 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0804 00:01:14.679892   55372 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/stopped-upgrade-082329/config.json ...
	I0804 00:01:14.680190   55372 start.go:360] acquireMachinesLock for stopped-upgrade-082329: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:01:14.680250   55372 start.go:364] duration metric: took 38.07µs to acquireMachinesLock for "stopped-upgrade-082329"
	I0804 00:01:14.680266   55372 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:01:14.680272   55372 fix.go:54] fixHost starting: 
	I0804 00:01:14.680640   55372 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:01:14.680679   55372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:01:14.697862   55372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0804 00:01:14.698489   55372 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:01:14.699162   55372 main.go:141] libmachine: Using API Version  1
	I0804 00:01:14.699181   55372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:01:14.699673   55372 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:01:14.699883   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	I0804 00:01:14.700046   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .GetState
	I0804 00:01:14.701832   55372 fix.go:112] recreateIfNeeded on stopped-upgrade-082329: state=Stopped err=<nil>
	I0804 00:01:14.701861   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	W0804 00:01:14.702032   55372 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:01:14.703942   55372 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-082329" ...
	I0804 00:01:14.204737   55045 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:01:14.215392   55045 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:01:14.257739   55045 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:01:14.279421   55045 system_pods.go:59] 7 kube-system pods found
	I0804 00:01:14.279466   55045 system_pods.go:61] "coredns-6d4b75cb6d-cj75k" [2727725c-af7d-45ce-8b41-23793fe82fd5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:01:14.279473   55045 system_pods.go:61] "etcd-running-upgrade-860380" [44fe9887-d898-49db-ba21-79770071b6b0] Running
	I0804 00:01:14.279483   55045 system_pods.go:61] "kube-apiserver-running-upgrade-860380" [0aa6a5ef-91ba-49e0-8bbf-d6c4ed5259c1] Running
	I0804 00:01:14.279487   55045 system_pods.go:61] "kube-controller-manager-running-upgrade-860380" [f3dee082-78bf-4a8d-9ccb-90390cd1f6f7] Running
	I0804 00:01:14.279492   55045 system_pods.go:61] "kube-proxy-g4cft" [4c91e0b9-a034-46ba-afca-47a0d28e029e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0804 00:01:14.279495   55045 system_pods.go:61] "kube-scheduler-running-upgrade-860380" [3876537d-f7b3-4eef-8326-e795a1ce7932] Running
	I0804 00:01:14.279501   55045 system_pods.go:61] "storage-provisioner" [26b0ff07-0fcb-4b10-a722-c8981dbe33bd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0804 00:01:14.279507   55045 system_pods.go:74] duration metric: took 21.748348ms to wait for pod list to return data ...
	I0804 00:01:14.279514   55045 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:01:14.284127   55045 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0804 00:01:14.284185   55045 node_conditions.go:123] node cpu capacity is 2
	I0804 00:01:14.284199   55045 node_conditions.go:105] duration metric: took 4.679677ms to run NodePressure ...
	I0804 00:01:14.284220   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:15.622760   55045 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.338509647s)
	I0804 00:01:15.622821   55045 retry.go:31] will retry after 144.717µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	[addons] Applied essential addon: CoreDNS
	
	stderr:
	error execution phase addon/kube-proxy: unable to update daemonset: Put "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy?timeout=10s": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:15.623971   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:15.783400   55045 retry.go:31] will retry after 108.523µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
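The %!D(MISSING) in the request URLs above is almost certainly a formatting artifact rather than part of the real query: the label selector is URL-encoded as k8s-app%3Dkube-dns, and when the error text is later pushed through a printf-style formatter the %3D sequence is read as a format verb with no matching argument. The underlying failure is simply the connection refused to 192.168.72.238:8443 while the apiserver is still down, which the retry loop keeps hitting. Decoding the selector:

    printf '%s\n' 'labelSelector=k8s-app%3Dkube-dns'   # %3D is '=', so the request asks for k8s-app=kube-dns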
	I0804 00:01:15.784577   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:15.950099   55045 retry.go:31] will retry after 331.277µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:15.951243   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.185972   55045 retry.go:31] will retry after 408.234µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.187123   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.327372   55045 retry.go:31] will retry after 318.569µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.328547   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.598061   55045 retry.go:31] will retry after 401.689µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.599215   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:14.586111   54831 pod_ready.go:102] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"False"
	I0804 00:01:17.084446   54831 pod_ready.go:102] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"False"
	I0804 00:01:14.705555   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .Start
	I0804 00:01:14.705748   55372 main.go:141] libmachine: (stopped-upgrade-082329) Ensuring networks are active...
	I0804 00:01:14.706592   55372 main.go:141] libmachine: (stopped-upgrade-082329) Ensuring network default is active
	I0804 00:01:14.706814   55372 main.go:141] libmachine: (stopped-upgrade-082329) Ensuring network mk-stopped-upgrade-082329 is active
	I0804 00:01:14.707372   55372 main.go:141] libmachine: (stopped-upgrade-082329) Getting domain xml...
	I0804 00:01:14.707957   55372 main.go:141] libmachine: (stopped-upgrade-082329) Creating domain...
	I0804 00:01:16.050263   55372 main.go:141] libmachine: (stopped-upgrade-082329) Waiting to get IP...
	I0804 00:01:16.051222   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.051682   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.051737   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.051659   55407 retry.go:31] will retry after 246.916412ms: waiting for machine to come up
	I0804 00:01:16.300484   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.301271   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.301307   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.301219   55407 retry.go:31] will retry after 261.540437ms: waiting for machine to come up
	I0804 00:01:16.565029   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.565513   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.565543   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.565475   55407 retry.go:31] will retry after 362.626193ms: waiting for machine to come up
	I0804 00:01:16.930234   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.930965   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.930989   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.930907   55407 retry.go:31] will retry after 594.874519ms: waiting for machine to come up
	I0804 00:01:17.527687   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:17.528200   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:17.528219   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:17.528160   55407 retry.go:31] will retry after 490.301945ms: waiting for machine to come up
	I0804 00:01:18.020445   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:18.020918   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:18.020949   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:18.020868   55407 retry.go:31] will retry after 704.018662ms: waiting for machine to come up
	I0804 00:01:18.727093   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:18.727633   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:18.727663   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:18.727587   55407 retry.go:31] will retry after 843.568311ms: waiting for machine to come up
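The loop above restarts the stopped-upgrade-082329 libvirt domain and then polls the mk-stopped-upgrade-082329 network's DHCP leases until the guest reports an IP. The same state can be checked by hand with virsh on the host (a sketch; needs root or membership in the libvirt group):

    virsh domstate stopped-upgrade-082329                   # should go from "shut off" to "running"
    virsh net-dhcp-leases mk-stopped-upgrade-082329         # a lease for MAC 52:54:00:74:3e:67 appears once the guest is up
    virsh domifaddr stopped-upgrade-082329 --source lease   # the address the wait loop is looking for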
	I0804 00:01:19.583518   54831 pod_ready.go:92] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.583544   54831 pod_ready.go:81] duration metric: took 11.006421508s for pod "etcd-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.583555   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.588534   54831 pod_ready.go:92] pod "kube-apiserver-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.588561   54831 pod_ready.go:81] duration metric: took 4.99799ms for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.588573   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.593693   54831 pod_ready.go:92] pod "kube-controller-manager-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.593716   54831 pod_ready.go:81] duration metric: took 5.134745ms for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.593728   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.598907   54831 pod_ready.go:92] pod "kube-proxy-sdch9" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.598929   54831 pod_ready.go:81] duration metric: took 5.193479ms for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.598941   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.603645   54831 pod_ready.go:92] pod "kube-scheduler-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.603665   54831 pod_ready.go:81] duration metric: took 4.715626ms for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.603675   54831 pod_ready.go:38] duration metric: took 12.540625859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:01:19.603702   54831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:01:19.615511   54831 ops.go:34] apiserver oom_adj: -16
	I0804 00:01:19.615535   54831 kubeadm.go:597] duration metric: took 40.635767898s to restartPrimaryControlPlane
	I0804 00:01:19.615546   54831 kubeadm.go:394] duration metric: took 40.845631281s to StartCluster
	I0804 00:01:19.615565   54831 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:01:19.615659   54831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:01:19.616507   54831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:01:19.616769   54831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:01:19.616886   54831 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:01:19.616988   54831 config.go:182] Loaded profile config "pause-908631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:01:19.618522   54831 out.go:177] * Verifying Kubernetes components...
	I0804 00:01:19.619370   54831 out.go:177] * Enabled addons: 
	I0804 00:01:16.774955   55045 retry.go:31] will retry after 895.807µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.776080   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.911394   55045 retry.go:31] will retry after 1.608356ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.913620   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.010920   55045 retry.go:31] will retry after 3.071636ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.014104   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.135931   55045 retry.go:31] will retry after 2.890173ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.139242   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.299608   55045 retry.go:31] will retry after 4.079976ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.303794   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.420023   55045 retry.go:31] will retry after 10.704236ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.431274   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.537334   55045 retry.go:31] will retry after 12.597943ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.550618   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.636570   55045 retry.go:31] will retry after 25.462854ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.662845   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.722843   55045 retry.go:31] will retry after 24.454523ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.748130   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.805885   55045 retry.go:31] will retry after 51.634666ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.858158   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.926214   55045 retry.go:31] will retry after 63.054027ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.989533   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.044484   55045 retry.go:31] will retry after 76.37341ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.121790   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.189025   55045 retry.go:31] will retry after 191.071574ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.380230   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.445872   55045 retry.go:31] will retry after 207.346359ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.654321   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.707196   55045 retry.go:31] will retry after 243.897896ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.951684   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:19.042663   55045 retry.go:31] will retry after 552.27404ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:19.595238   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:19.671731   55045 retry.go:31] will retry after 1.080047057s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:20.751943   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:20.806899   55045 retry.go:31] will retry after 1.498355326s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:19.620129   54831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:01:19.620926   54831 addons.go:510] duration metric: took 4.041174ms for enable addons: enabled=[]
	I0804 00:01:19.762760   54831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:01:19.787361   54831 node_ready.go:35] waiting up to 6m0s for node "pause-908631" to be "Ready" ...
	I0804 00:01:19.790917   54831 node_ready.go:49] node "pause-908631" has status "Ready":"True"
	I0804 00:01:19.790938   54831 node_ready.go:38] duration metric: took 3.530309ms for node "pause-908631" to be "Ready" ...
	I0804 00:01:19.790955   54831 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:01:19.984002   54831 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-m6rv2" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.381683   54831 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6rv2" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:20.381713   54831 pod_ready.go:81] duration metric: took 397.68569ms for pod "coredns-7db6d8ff4d-m6rv2" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.381726   54831 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.781791   54831 pod_ready.go:92] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:20.781833   54831 pod_ready.go:81] duration metric: took 400.098825ms for pod "etcd-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.781847   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.182014   54831 pod_ready.go:92] pod "kube-apiserver-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:21.182045   54831 pod_ready.go:81] duration metric: took 400.189718ms for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.182058   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.581817   54831 pod_ready.go:92] pod "kube-controller-manager-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:21.581847   54831 pod_ready.go:81] duration metric: took 399.780717ms for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.581860   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.982014   54831 pod_ready.go:92] pod "kube-proxy-sdch9" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:21.982038   54831 pod_ready.go:81] duration metric: took 400.170308ms for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.982050   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:22.381910   54831 pod_ready.go:92] pod "kube-scheduler-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:22.381938   54831 pod_ready.go:81] duration metric: took 399.879716ms for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:22.381948   54831 pod_ready.go:38] duration metric: took 2.590979677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:01:22.381967   54831 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:01:22.382027   54831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:01:22.399279   54831 api_server.go:72] duration metric: took 2.782468521s to wait for apiserver process to appear ...
	I0804 00:01:22.399304   54831 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:01:22.399326   54831 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0804 00:01:22.404605   54831 api_server.go:279] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0804 00:01:22.405883   54831 api_server.go:141] control plane version: v1.30.3
	I0804 00:01:22.405910   54831 api_server.go:131] duration metric: took 6.598227ms to wait for apiserver health ...
	I0804 00:01:22.405920   54831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:01:22.585013   54831 system_pods.go:59] 6 kube-system pods found
	I0804 00:01:22.585045   54831 system_pods.go:61] "coredns-7db6d8ff4d-m6rv2" [19bde3ef-d3d3-48bf-a30b-a59535e9d71d] Running
	I0804 00:01:22.585052   54831 system_pods.go:61] "etcd-pause-908631" [b3c2a959-dabc-42e5-9c77-506bb4e37cde] Running
	I0804 00:01:22.585056   54831 system_pods.go:61] "kube-apiserver-pause-908631" [e3a230d8-ca26-497c-9782-76490394e031] Running
	I0804 00:01:22.585061   54831 system_pods.go:61] "kube-controller-manager-pause-908631" [1186de38-5ec6-4963-99fe-99f76a690f54] Running
	I0804 00:01:22.585066   54831 system_pods.go:61] "kube-proxy-sdch9" [9713215c-dca4-47f8-97c1-b0fa2bf7735e] Running
	I0804 00:01:22.585075   54831 system_pods.go:61] "kube-scheduler-pause-908631" [6fd180c8-d4e9-444c-89ec-ff15a13a0cbd] Running
	I0804 00:01:22.585082   54831 system_pods.go:74] duration metric: took 179.155014ms to wait for pod list to return data ...
	I0804 00:01:22.585091   54831 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:01:22.781758   54831 default_sa.go:45] found service account: "default"
	I0804 00:01:22.781791   54831 default_sa.go:55] duration metric: took 196.690452ms for default service account to be created ...
	I0804 00:01:22.781803   54831 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:01:22.984771   54831 system_pods.go:86] 6 kube-system pods found
	I0804 00:01:22.984807   54831 system_pods.go:89] "coredns-7db6d8ff4d-m6rv2" [19bde3ef-d3d3-48bf-a30b-a59535e9d71d] Running
	I0804 00:01:22.984815   54831 system_pods.go:89] "etcd-pause-908631" [b3c2a959-dabc-42e5-9c77-506bb4e37cde] Running
	I0804 00:01:22.984822   54831 system_pods.go:89] "kube-apiserver-pause-908631" [e3a230d8-ca26-497c-9782-76490394e031] Running
	I0804 00:01:22.984834   54831 system_pods.go:89] "kube-controller-manager-pause-908631" [1186de38-5ec6-4963-99fe-99f76a690f54] Running
	I0804 00:01:22.984842   54831 system_pods.go:89] "kube-proxy-sdch9" [9713215c-dca4-47f8-97c1-b0fa2bf7735e] Running
	I0804 00:01:22.984848   54831 system_pods.go:89] "kube-scheduler-pause-908631" [6fd180c8-d4e9-444c-89ec-ff15a13a0cbd] Running
	I0804 00:01:22.984856   54831 system_pods.go:126] duration metric: took 203.046318ms to wait for k8s-apps to be running ...
	I0804 00:01:22.984869   54831 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:01:22.984921   54831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:01:23.003257   54831 system_svc.go:56] duration metric: took 18.378464ms WaitForService to wait for kubelet
	I0804 00:01:23.003290   54831 kubeadm.go:582] duration metric: took 3.386484448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:01:23.003312   54831 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:01:23.182624   54831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:01:23.182661   54831 node_conditions.go:123] node cpu capacity is 2
	I0804 00:01:23.182676   54831 node_conditions.go:105] duration metric: took 179.358194ms to run NodePressure ...
	I0804 00:01:23.182698   54831 start.go:241] waiting for startup goroutines ...
	I0804 00:01:23.182709   54831 start.go:246] waiting for cluster config update ...
	I0804 00:01:23.182723   54831 start.go:255] writing updated cluster config ...
	I0804 00:01:23.183032   54831 ssh_runner.go:195] Run: rm -f paused
	I0804 00:01:23.233711   54831 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:01:23.235900   54831 out.go:177] * Done! kubectl is now configured to use "pause-908631" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.897696628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729683897668591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1eae3be-8db0-49bd-8f92-3c683f24ac0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.898295268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3853933-75ad-4468-8638-cf378a865d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.898379746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3853933-75ad-4468-8638-cf378a865d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.898739333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3853933-75ad-4468-8638-cf378a865d01 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.943213439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=523143ba-8afb-480f-b582-7def42ccbffc name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.943305502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=523143ba-8afb-480f-b582-7def42ccbffc name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.944406482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afd8f8db-bae1-4c78-9530-d605d8195a01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.944937913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729683944910158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afd8f8db-bae1-4c78-9530-d605d8195a01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.945929341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ac6db13-f7db-4d26-8955-7bfb78cf2433 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.946004608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ac6db13-f7db-4d26-8955-7bfb78cf2433 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.948343464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ac6db13-f7db-4d26-8955-7bfb78cf2433 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.995607041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=087f9f50-fe4d-4a8d-9597-c97d39388fa8 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.995701054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=087f9f50-fe4d-4a8d-9597-c97d39388fa8 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.996852525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8307fbd6-b647-4d33-b82d-bfd2a4f506a5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.997410338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729683997384156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8307fbd6-b647-4d33-b82d-bfd2a4f506a5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.998076490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63fda313-b2a4-490f-9241-3ee0c9fa3cc9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.998132214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63fda313-b2a4-490f-9241-3ee0c9fa3cc9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:23 pause-908631 crio[2440]: time="2024-08-04 00:01:23.998446966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63fda313-b2a4-490f-9241-3ee0c9fa3cc9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:24 pause-908631 crio[2440]: time="2024-08-04 00:01:24.039971304Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d240144a-7f1a-46ee-9752-49fc52ec66d6 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:24 pause-908631 crio[2440]: time="2024-08-04 00:01:24.040262580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d240144a-7f1a-46ee-9752-49fc52ec66d6 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:24 pause-908631 crio[2440]: time="2024-08-04 00:01:24.041956606Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4628b824-0207-4233-bf10-321815119f75 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:24 pause-908631 crio[2440]: time="2024-08-04 00:01:24.042413170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729684042387079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4628b824-0207-4233-bf10-321815119f75 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:24 pause-908631 crio[2440]: time="2024-08-04 00:01:24.043199115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d27b1456-bc72-4ef7-8299-6a8f24af4893 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:24 pause-908631 crio[2440]: time="2024-08-04 00:01:24.043265124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d27b1456-bc72-4ef7-8299-6a8f24af4893 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:24 pause-908631 crio[2440]: time="2024-08-04 00:01:24.043545797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d27b1456-bc72-4ef7-8299-6a8f24af4893 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae91c6e7ce409       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago       Running             coredns                   2                   223799266f1f2       coredns-7db6d8ff4d-m6rv2
	a9d7219f8e908       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   22 seconds ago       Running             kube-controller-manager   2                   be59c16ee9e26       kube-controller-manager-pause-908631
	de2a3f0590146       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   22 seconds ago       Running             kube-scheduler            2                   e5fd4884e1a69       kube-scheduler-pause-908631
	23fb83bcaadca       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   22 seconds ago       Running             kube-apiserver            2                   fa02d9d59730d       kube-apiserver-pause-908631
	684f34a2c5626       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago       Running             etcd                      2                   bf6d7cca9e5b3       etcd-pause-908631
	e418602c155a1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago       Exited              coredns                   1                   223799266f1f2       coredns-7db6d8ff4d-m6rv2
	58a4650caf704       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   46 seconds ago       Exited              kube-apiserver            1                   fa02d9d59730d       kube-apiserver-pause-908631
	ccb06696e8434       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   46 seconds ago       Running             kube-proxy                1                   5f6b9bb54c523       kube-proxy-sdch9
	a2c74ac35bbee       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   46 seconds ago       Exited              kube-scheduler            1                   e5fd4884e1a69       kube-scheduler-pause-908631
	81d0597919597       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   46 seconds ago       Exited              etcd                      1                   bf6d7cca9e5b3       etcd-pause-908631
	a7e349bedd9ef       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   46 seconds ago       Exited              kube-controller-manager   1                   be59c16ee9e26       kube-controller-manager-pause-908631
	d90eacd5648d2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   About a minute ago   Exited              kube-proxy                0                   cde87e8667a17       kube-proxy-sdch9
	
	
	==> coredns [ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41793 - 4350 "HINFO IN 5747257240772004115.6068251950412122109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015200388s
	
	
	==> coredns [e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48830 - 14795 "HINFO IN 3627155112408540510.8888645309219256947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013593942s
	
	
	==> describe nodes <==
	Name:               pause-908631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-908631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=pause-908631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_59_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-908631
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:01:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    pause-908631
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d45c5128bc6b470c9391b7fde7d11daf
	  System UUID:                d45c5128-bc6b-470c-9391-b7fde7d11daf
	  Boot ID:                    f45357b6-996c-4eef-86c2-ddf8dc839719
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-m6rv2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-pause-908631                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         118s
	  kube-system                 kube-apiserver-pause-908631             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-pause-908631    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-sdch9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-pause-908631             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 42s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node pause-908631 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node pause-908631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node pause-908631 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node pause-908631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node pause-908631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node pause-908631 status is now: NodeHasSufficientPID
	  Normal  NodeReady                117s                 kubelet          Node pause-908631 status is now: NodeReady
	  Normal  RegisteredNode           105s                 node-controller  Node pause-908631 event: Registered Node pause-908631 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node pause-908631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node pause-908631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node pause-908631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                   node-controller  Node pause-908631 event: Registered Node pause-908631 in Controller
	
	
	==> dmesg <==
	[  +9.491676] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.076411] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078187] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.211012] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.152066] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.316503] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.671192] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.060098] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.824086] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +1.215545] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.349179] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.080272] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.811804] systemd-fstab-generator[1482]: Ignoring "noauto" option for root device
	[  +0.082825] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.560452] kauditd_printk_skb: 88 callbacks suppressed
	[Aug 4 00:00] systemd-fstab-generator[2358]: Ignoring "noauto" option for root device
	[  +0.189663] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[  +0.205539] systemd-fstab-generator[2384]: Ignoring "noauto" option for root device
	[  +0.161563] systemd-fstab-generator[2396]: Ignoring "noauto" option for root device
	[  +0.372100] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +0.859663] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[  +5.310577] kauditd_printk_skb: 195 callbacks suppressed
	[Aug 4 00:01] systemd-fstab-generator[3387]: Ignoring "noauto" option for root device
	[  +5.610384] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.219837] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	
	
	==> etcd [684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84] <==
	{"level":"info","ts":"2024-08-04T00:01:02.273395Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:01:02.273438Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:01:02.273785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec switched to configuration voters=(18146372362501279212)"}
	{"level":"info","ts":"2024-08-04T00:01:02.273912Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","added-peer-id":"fbd4dd8524dacdec","added-peer-peer-urls":["https://192.168.50.32:2380"]}
	{"level":"info","ts":"2024-08-04T00:01:02.27416Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:01:02.274245Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:01:02.290371Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:01:02.290989Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fbd4dd8524dacdec","initial-advertise-peer-urls":["https://192.168.50.32:2380"],"listen-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:01:02.290717Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:01:02.293163Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:01:02.291998Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:01:03.786715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-04T00:01:03.786772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:01:03.786799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-08-04T00:01:03.786818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.786823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.786831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.786838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.792572Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-908631 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:01:03.792679Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:01:03.793212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:01:03.796239Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:01:03.796275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:01:03.798132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:01:03.799804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	
	
	==> etcd [81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47] <==
	{"level":"info","ts":"2024-08-04T00:00:40.613849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:00:40.613932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-08-04T00:00:40.613967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.613993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.61402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.61415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.619238Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-908631 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:00:40.619248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:00:40.619733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:00:40.619772Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:00:40.619301Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:00:40.621903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	{"level":"info","ts":"2024-08-04T00:00:40.622814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-04T00:00:56.94253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.609975ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14838394416089741953 > lease_revoke:<id:4dec911aaefdacec>","response":"size:28"}
	{"level":"warn","ts":"2024-08-04T00:00:57.086976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.348799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14838394416089741954 > lease_revoke:<id:4dec911aaefdaca3>","response":"size:28"}
	{"level":"info","ts":"2024-08-04T00:00:59.213088Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T00:00:59.213135Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-908631","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	{"level":"warn","ts":"2024-08-04T00:00:59.213221Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:00:59.21326Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:00:59.21499Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:00:59.215016Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T00:00:59.215104Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fbd4dd8524dacdec","current-leader-member-id":"fbd4dd8524dacdec"}
	{"level":"info","ts":"2024-08-04T00:00:59.218594Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:00:59.218772Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:00:59.2188Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-908631","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	
	
	==> kernel <==
	 00:01:24 up 2 min,  0 users,  load average: 1.06, 0.54, 0.21
	Linux pause-908631 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e] <==
	I0804 00:01:05.401254       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:01:05.412130       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0804 00:01:05.475928       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:01:05.476136       1 policy_source.go:224] refreshing policies
	I0804 00:01:05.476096       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:01:05.487306       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:01:05.487396       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:01:05.487532       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:01:05.487595       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:01:05.487734       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:01:05.488113       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:01:05.488141       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:01:05.488147       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:01:05.488152       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:01:05.487347       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:01:05.519706       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:01:06.289404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0804 00:01:06.637557       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.32]
	I0804 00:01:06.638953       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:01:06.644583       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 00:01:06.894185       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:01:06.906115       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:01:06.952847       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:01:06.989301       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:01:06.997843       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70] <==
	I0804 00:00:49.007433       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0804 00:00:49.007493       1 establishing_controller.go:87] Shutting down EstablishingController
	I0804 00:00:49.007520       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0804 00:00:49.007530       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0804 00:00:49.007595       1 naming_controller.go:302] Shutting down NamingConditionController
	I0804 00:00:49.007621       1 controller.go:167] Shutting down OpenAPI controller
	I0804 00:00:49.008814       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0804 00:00:49.008849       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0804 00:00:49.009822       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 00:00:49.009944       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 00:00:49.010029       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0804 00:00:49.010099       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0804 00:00:49.010112       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0804 00:00:49.010141       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0804 00:00:49.010223       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 00:00:49.010242       1 controller.go:157] Shutting down quota evaluator
	I0804 00:00:49.010270       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.012999       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013134       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013161       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013218       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013340       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0804 00:00:49.013367       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 00:00:49.014144       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0804 00:00:49.013358       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-controller-manager [a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11] <==
	I0804 00:00:44.041746       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0804 00:00:44.044216       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0804 00:00:44.044540       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0804 00:00:44.044752       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0804 00:00:44.047939       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0804 00:00:44.048020       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0804 00:00:44.048090       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0804 00:00:44.048115       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0804 00:00:44.050255       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0804 00:00:44.051136       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0804 00:00:44.051218       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0804 00:00:44.057947       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0804 00:00:44.059705       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0804 00:00:44.059736       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0804 00:00:44.076735       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0804 00:00:44.077009       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0804 00:00:44.081510       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0804 00:00:44.081661       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0804 00:00:44.081695       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0804 00:00:44.100249       1 shared_informer.go:320] Caches are synced for tokens
	W0804 00:00:54.085806       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	W0804 00:00:54.587597       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	W0804 00:00:55.588441       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	W0804 00:00:57.589594       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	E0804 00:00:57.589773       1 cidr_allocator.go:146] "Failed to list all nodes" err="Get \"https://192.168.50.32:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-ipam-controller"
	
	
	==> kube-controller-manager [a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6] <==
	I0804 00:01:18.437448       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 00:01:18.440709       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 00:01:18.441955       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0804 00:01:18.457222       1 shared_informer.go:320] Caches are synced for deployment
	I0804 00:01:18.468332       1 shared_informer.go:320] Caches are synced for HPA
	I0804 00:01:18.470851       1 shared_informer.go:320] Caches are synced for stateful set
	I0804 00:01:18.473768       1 shared_informer.go:320] Caches are synced for endpoint
	I0804 00:01:18.473931       1 shared_informer.go:320] Caches are synced for job
	I0804 00:01:18.473947       1 shared_informer.go:320] Caches are synced for ephemeral
	I0804 00:01:18.473961       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0804 00:01:18.474475       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:01:18.479627       1 shared_informer.go:320] Caches are synced for taint
	I0804 00:01:18.480407       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0804 00:01:18.481105       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-908631"
	I0804 00:01:18.482666       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0804 00:01:18.490770       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0804 00:01:18.493472       1 shared_informer.go:320] Caches are synced for GC
	I0804 00:01:18.496006       1 shared_informer.go:320] Caches are synced for daemon sets
	I0804 00:01:18.497262       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 00:01:18.497455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="90.117µs"
	I0804 00:01:18.511160       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:01:18.516302       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:01:18.943850       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:01:18.952448       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:01:18.952562       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a] <==
	I0804 00:00:39.902496       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:00:42.058504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0804 00:00:42.110741       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:00:42.110842       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:00:42.110876       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:00:42.113523       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:00:42.113940       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:00:42.114279       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:42.115638       1 config.go:192] "Starting service config controller"
	I0804 00:00:42.115923       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:00:42.116084       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:00:42.116138       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:00:42.118421       1 config.go:319] "Starting node config controller"
	I0804 00:00:42.118462       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:00:42.216305       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:00:42.216404       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:00:42.220764       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c] <==
	I0803 23:59:43.623538       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:59:44.047690       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0803 23:59:44.181984       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:59:44.182115       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:59:44.182146       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:59:44.187151       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:59:44.187502       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:59:44.187562       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:59:44.191955       1 config.go:192] "Starting service config controller"
	I0803 23:59:44.192404       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:59:44.192495       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:59:44.192534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:59:44.196120       1 config.go:319] "Starting node config controller"
	I0803 23:59:44.196230       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:59:44.293603       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:59:44.293729       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:59:44.296407       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9] <==
	I0804 00:00:39.639723       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:00:41.998461       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:00:41.998603       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:00:41.998633       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:00:41.998708       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:00:42.055779       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:00:42.056272       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:42.060666       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:00:42.060762       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:00:42.061224       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:00:42.061407       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:00:42.164253       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 00:00:59.063837       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08] <==
	I0804 00:01:03.212820       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:01:05.387756       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:01:05.387847       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:01:05.387857       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:01:05.387863       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:01:05.425323       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:01:05.425403       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:01:05.426961       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:01:05.427146       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:01:05.427185       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:01:05.427220       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:01:05.528175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307635    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e43b02ecebb11bd0d968246cda47b523-kubeconfig\") pod \"kube-scheduler-pause-908631\" (UID: \"e43b02ecebb11bd0d968246cda47b523\") " pod="kube-system/kube-scheduler-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307655    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/48028332f8f97dca40ee799525fc5447-etcd-certs\") pod \"etcd-pause-908631\" (UID: \"48028332f8f97dca40ee799525fc5447\") " pod="kube-system/etcd-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307671    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3cce1012fc67d8e7aa9fe9c3ac0e186-ca-certs\") pod \"kube-apiserver-pause-908631\" (UID: \"c3cce1012fc67d8e7aa9fe9c3ac0e186\") " pod="kube-system/kube-apiserver-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307685    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62c02dfac9880013304d4fe84d69a808-ca-certs\") pod \"kube-controller-manager-pause-908631\" (UID: \"62c02dfac9880013304d4fe84d69a808\") " pod="kube-system/kube-controller-manager-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.371834    3394 kubelet_node_status.go:73] "Attempting to register node" node="pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: E0804 00:01:01.372804    3394 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.532320    3394 scope.go:117] "RemoveContainer" containerID="81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.533430    3394 scope.go:117] "RemoveContainer" containerID="58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.534733    3394 scope.go:117] "RemoveContainer" containerID="a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.535794    3394 scope.go:117] "RemoveContainer" containerID="a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: E0804 00:01:01.675016    3394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-908631?timeout=10s\": dial tcp 192.168.50.32:8443: connect: connection refused" interval="800ms"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.775224    3394 kubelet_node_status.go:73] "Attempting to register node" node="pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: E0804 00:01:01.777182    3394 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-908631"
	Aug 04 00:01:02 pause-908631 kubelet[3394]: I0804 00:01:02.579988    3394 kubelet_node_status.go:73] "Attempting to register node" node="pause-908631"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.552380    3394 kubelet_node_status.go:112] "Node was previously registered" node="pause-908631"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.552479    3394 kubelet_node_status.go:76] "Successfully registered node" node="pause-908631"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.554115    3394 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.555000    3394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.055738    3394 apiserver.go:52] "Watching apiserver"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.059486    3394 topology_manager.go:215] "Topology Admit Handler" podUID="9713215c-dca4-47f8-97c1-b0fa2bf7735e" podNamespace="kube-system" podName="kube-proxy-sdch9"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.059639    3394 topology_manager.go:215] "Topology Admit Handler" podUID="19bde3ef-d3d3-48bf-a30b-a59535e9d71d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m6rv2"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.082863    3394 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.084199    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9713215c-dca4-47f8-97c1-b0fa2bf7735e-xtables-lock\") pod \"kube-proxy-sdch9\" (UID: \"9713215c-dca4-47f8-97c1-b0fa2bf7735e\") " pod="kube-system/kube-proxy-sdch9"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.084260    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9713215c-dca4-47f8-97c1-b0fa2bf7735e-lib-modules\") pod \"kube-proxy-sdch9\" (UID: \"9713215c-dca4-47f8-97c1-b0fa2bf7735e\") " pod="kube-system/kube-proxy-sdch9"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.361159    3394 scope.go:117] "RemoveContainer" containerID="e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-908631 -n pause-908631
helpers_test.go:261: (dbg) Run:  kubectl --context pause-908631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-908631 -n pause-908631
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-908631 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-908631 logs -n 25: (1.432219509s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p test-preload-278819         | test-preload-278819       | jenkins | v1.33.1 | 03 Aug 24 23:55 UTC | 03 Aug 24 23:56 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| image   | test-preload-278819 image list | test-preload-278819       | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC | 03 Aug 24 23:56 UTC |
	| delete  | -p test-preload-278819         | test-preload-278819       | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC | 03 Aug 24 23:56 UTC |
	| start   | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC | 03 Aug 24 23:57 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC | 03 Aug 24 23:57 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:57 UTC | 03 Aug 24 23:57 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-144411       | scheduled-stop-144411     | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 03 Aug 24 23:58 UTC |
	| start   | -p offline-crio-855826         | offline-crio-855826       | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 03 Aug 24 23:59 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-908631 --memory=2048  | pause-908631              | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 04 Aug 24 00:00 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-860380      | minikube                  | jenkins | v1.26.0 | 03 Aug 24 23:58 UTC | 04 Aug 24 00:00 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-302198   | kubernetes-upgrade-302198 | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-855826         | offline-crio-855826       | jenkins | v1.33.1 | 03 Aug 24 23:59 UTC | 03 Aug 24 23:59 UTC |
	| start   | -p stopped-upgrade-082329      | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:00 UTC | 04 Aug 24 00:01 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-908631                | pause-908631              | jenkins | v1.33.1 | 04 Aug 24 00:00 UTC | 04 Aug 24 00:01 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-860380      | running-upgrade-860380    | jenkins | v1.33.1 | 04 Aug 24 00:00 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-082329 stop    | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:01 UTC | 04 Aug 24 00:01 UTC |
	| start   | -p stopped-upgrade-082329      | stopped-upgrade-082329    | jenkins | v1.33.1 | 04 Aug 24 00:01 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:01:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:01:14.554393   55372 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:01:14.554515   55372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:01:14.554525   55372 out.go:304] Setting ErrFile to fd 2...
	I0804 00:01:14.554530   55372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:01:14.554725   55372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:01:14.555269   55372 out.go:298] Setting JSON to false
	I0804 00:01:14.556201   55372 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6219,"bootTime":1722723456,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:01:14.556262   55372 start.go:139] virtualization: kvm guest
	I0804 00:01:14.558381   55372 out.go:177] * [stopped-upgrade-082329] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:01:14.559766   55372 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:01:14.559808   55372 notify.go:220] Checking for updates...
	I0804 00:01:14.562255   55372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:01:14.564013   55372 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:01:14.565290   55372 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:01:14.566954   55372 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:01:14.568329   55372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:01:14.569980   55372 config.go:182] Loaded profile config "stopped-upgrade-082329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0804 00:01:14.570427   55372 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:01:14.570483   55372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:01:14.588554   55372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0804 00:01:14.589107   55372 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:01:14.589805   55372 main.go:141] libmachine: Using API Version  1
	I0804 00:01:14.589835   55372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:01:14.590277   55372 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:01:14.590431   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	I0804 00:01:14.592052   55372 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0804 00:01:14.593395   55372 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:01:14.593702   55372 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:01:14.593743   55372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:01:14.609348   55372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0804 00:01:14.609870   55372 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:01:14.610404   55372 main.go:141] libmachine: Using API Version  1
	I0804 00:01:14.610424   55372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:01:14.610825   55372 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:01:14.611023   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	I0804 00:01:14.654228   55372 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:01:14.655628   55372 start.go:297] selected driver: kvm2
	I0804 00:01:14.655649   55372 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-082329 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-082
329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.97 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0804 00:01:14.655779   55372 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:01:14.656802   55372 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:01:14.656899   55372 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:01:14.675100   55372 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:01:14.675572   55372 cni.go:84] Creating CNI manager for ""
	I0804 00:01:14.675600   55372 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:01:14.675693   55372 start.go:340] cluster config:
	{Name:stopped-upgrade-082329 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-082329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.97 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0804 00:01:14.675864   55372 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:01:14.678050   55372 out.go:177] * Starting "stopped-upgrade-082329" primary control-plane node in "stopped-upgrade-082329" cluster
	I0804 00:01:14.679539   55372 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0804 00:01:14.679594   55372 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0804 00:01:14.679618   55372 cache.go:56] Caching tarball of preloaded images
	I0804 00:01:14.679730   55372 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:01:14.679744   55372 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0804 00:01:14.679892   55372 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/stopped-upgrade-082329/config.json ...
	I0804 00:01:14.680190   55372 start.go:360] acquireMachinesLock for stopped-upgrade-082329: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:01:14.680250   55372 start.go:364] duration metric: took 38.07µs to acquireMachinesLock for "stopped-upgrade-082329"
	I0804 00:01:14.680266   55372 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:01:14.680272   55372 fix.go:54] fixHost starting: 
	I0804 00:01:14.680640   55372 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:01:14.680679   55372 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:01:14.697862   55372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0804 00:01:14.698489   55372 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:01:14.699162   55372 main.go:141] libmachine: Using API Version  1
	I0804 00:01:14.699181   55372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:01:14.699673   55372 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:01:14.699883   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	I0804 00:01:14.700046   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .GetState
	I0804 00:01:14.701832   55372 fix.go:112] recreateIfNeeded on stopped-upgrade-082329: state=Stopped err=<nil>
	I0804 00:01:14.701861   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .DriverName
	W0804 00:01:14.702032   55372 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:01:14.703942   55372 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-082329" ...
	I0804 00:01:14.204737   55045 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:01:14.215392   55045 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:01:14.257739   55045 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:01:14.279421   55045 system_pods.go:59] 7 kube-system pods found
	I0804 00:01:14.279466   55045 system_pods.go:61] "coredns-6d4b75cb6d-cj75k" [2727725c-af7d-45ce-8b41-23793fe82fd5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:01:14.279473   55045 system_pods.go:61] "etcd-running-upgrade-860380" [44fe9887-d898-49db-ba21-79770071b6b0] Running
	I0804 00:01:14.279483   55045 system_pods.go:61] "kube-apiserver-running-upgrade-860380" [0aa6a5ef-91ba-49e0-8bbf-d6c4ed5259c1] Running
	I0804 00:01:14.279487   55045 system_pods.go:61] "kube-controller-manager-running-upgrade-860380" [f3dee082-78bf-4a8d-9ccb-90390cd1f6f7] Running
	I0804 00:01:14.279492   55045 system_pods.go:61] "kube-proxy-g4cft" [4c91e0b9-a034-46ba-afca-47a0d28e029e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0804 00:01:14.279495   55045 system_pods.go:61] "kube-scheduler-running-upgrade-860380" [3876537d-f7b3-4eef-8326-e795a1ce7932] Running
	I0804 00:01:14.279501   55045 system_pods.go:61] "storage-provisioner" [26b0ff07-0fcb-4b10-a722-c8981dbe33bd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0804 00:01:14.279507   55045 system_pods.go:74] duration metric: took 21.748348ms to wait for pod list to return data ...
	I0804 00:01:14.279514   55045 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:01:14.284127   55045 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0804 00:01:14.284185   55045 node_conditions.go:123] node cpu capacity is 2
	I0804 00:01:14.284199   55045 node_conditions.go:105] duration metric: took 4.679677ms to run NodePressure ...
	I0804 00:01:14.284220   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:15.622760   55045 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.338509647s)
	I0804 00:01:15.622821   55045 retry.go:31] will retry after 144.717µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	[addons] Applied essential addon: CoreDNS
	
	stderr:
	error execution phase addon/kube-proxy: unable to update daemonset: Put "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy?timeout=10s": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:15.623971   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:15.783400   55045 retry.go:31] will retry after 108.523µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:15.784577   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:15.950099   55045 retry.go:31] will retry after 331.277µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:15.951243   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.185972   55045 retry.go:31] will retry after 408.234µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.187123   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.327372   55045 retry.go:31] will retry after 318.569µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.328547   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.598061   55045 retry.go:31] will retry after 401.689µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.599215   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:14.586111   54831 pod_ready.go:102] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"False"
	I0804 00:01:17.084446   54831 pod_ready.go:102] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"False"
	I0804 00:01:14.705555   55372 main.go:141] libmachine: (stopped-upgrade-082329) Calling .Start
	I0804 00:01:14.705748   55372 main.go:141] libmachine: (stopped-upgrade-082329) Ensuring networks are active...
	I0804 00:01:14.706592   55372 main.go:141] libmachine: (stopped-upgrade-082329) Ensuring network default is active
	I0804 00:01:14.706814   55372 main.go:141] libmachine: (stopped-upgrade-082329) Ensuring network mk-stopped-upgrade-082329 is active
	I0804 00:01:14.707372   55372 main.go:141] libmachine: (stopped-upgrade-082329) Getting domain xml...
	I0804 00:01:14.707957   55372 main.go:141] libmachine: (stopped-upgrade-082329) Creating domain...
	I0804 00:01:16.050263   55372 main.go:141] libmachine: (stopped-upgrade-082329) Waiting to get IP...
	I0804 00:01:16.051222   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.051682   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.051737   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.051659   55407 retry.go:31] will retry after 246.916412ms: waiting for machine to come up
	I0804 00:01:16.300484   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.301271   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.301307   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.301219   55407 retry.go:31] will retry after 261.540437ms: waiting for machine to come up
	I0804 00:01:16.565029   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.565513   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.565543   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.565475   55407 retry.go:31] will retry after 362.626193ms: waiting for machine to come up
	I0804 00:01:16.930234   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:16.930965   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:16.930989   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:16.930907   55407 retry.go:31] will retry after 594.874519ms: waiting for machine to come up
	I0804 00:01:17.527687   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:17.528200   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:17.528219   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:17.528160   55407 retry.go:31] will retry after 490.301945ms: waiting for machine to come up
	I0804 00:01:18.020445   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:18.020918   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:18.020949   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:18.020868   55407 retry.go:31] will retry after 704.018662ms: waiting for machine to come up
	I0804 00:01:18.727093   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:18.727633   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:18.727663   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:18.727587   55407 retry.go:31] will retry after 843.568311ms: waiting for machine to come up
	I0804 00:01:19.583518   54831 pod_ready.go:92] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.583544   54831 pod_ready.go:81] duration metric: took 11.006421508s for pod "etcd-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.583555   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.588534   54831 pod_ready.go:92] pod "kube-apiserver-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.588561   54831 pod_ready.go:81] duration metric: took 4.99799ms for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.588573   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.593693   54831 pod_ready.go:92] pod "kube-controller-manager-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.593716   54831 pod_ready.go:81] duration metric: took 5.134745ms for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.593728   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.598907   54831 pod_ready.go:92] pod "kube-proxy-sdch9" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.598929   54831 pod_ready.go:81] duration metric: took 5.193479ms for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.598941   54831 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.603645   54831 pod_ready.go:92] pod "kube-scheduler-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:19.603665   54831 pod_ready.go:81] duration metric: took 4.715626ms for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:19.603675   54831 pod_ready.go:38] duration metric: took 12.540625859s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:01:19.603702   54831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:01:19.615511   54831 ops.go:34] apiserver oom_adj: -16
	I0804 00:01:19.615535   54831 kubeadm.go:597] duration metric: took 40.635767898s to restartPrimaryControlPlane
	I0804 00:01:19.615546   54831 kubeadm.go:394] duration metric: took 40.845631281s to StartCluster
	I0804 00:01:19.615565   54831 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:01:19.615659   54831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:01:19.616507   54831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:01:19.616769   54831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:01:19.616886   54831 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:01:19.616988   54831 config.go:182] Loaded profile config "pause-908631": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:01:19.618522   54831 out.go:177] * Verifying Kubernetes components...
	I0804 00:01:19.619370   54831 out.go:177] * Enabled addons: 
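The pod_ready entries above record minikube polling the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until each reports the Ready condition. The sketch below shows the same condition check done with client-go; it is a minimal illustration, not minikube's own pod_ready.go. The kubeconfig path is the one the log writes, while the pod name, namespace, polling interval, and timeout are assumptions chosen for the example.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind the pod_ready lines above:
// a pod counts as "Ready" when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the log; interval and timeout are assumptions.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-9607/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-908631", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
```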
	I0804 00:01:16.774955   55045 retry.go:31] will retry after 895.807µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.776080   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:16.911394   55045 retry.go:31] will retry after 1.608356ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:16.913620   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.010920   55045 retry.go:31] will retry after 3.071636ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.014104   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.135931   55045 retry.go:31] will retry after 2.890173ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.139242   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.299608   55045 retry.go:31] will retry after 4.079976ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.303794   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.420023   55045 retry.go:31] will retry after 10.704236ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.431274   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.537334   55045 retry.go:31] will retry after 12.597943ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.550618   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.636570   55045 retry.go:31] will retry after 25.462854ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.662845   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.722843   55045 retry.go:31] will retry after 24.454523ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.748130   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.805885   55045 retry.go:31] will retry after 51.634666ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.858158   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:17.926214   55045 retry.go:31] will retry after 63.054027ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:17.989533   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.044484   55045 retry.go:31] will retry after 76.37341ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.121790   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.189025   55045 retry.go:31] will retry after 191.071574ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.380230   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.445872   55045 retry.go:31] will retry after 207.346359ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.654321   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:18.707196   55045 retry.go:31] will retry after 243.897896ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:18.951684   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:19.042663   55045 retry.go:31] will retry after 552.27404ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:19.595238   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:19.671731   55045 retry.go:31] will retry after 1.080047057s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
	I0804 00:01:20.751943   55045 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:01:20.806899   55045 retry.go:31] will retry after 1.498355326s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%!D(MISSING)kube-dns": dial tcp 192.168.72.238:8443: connect: connection refused
	To see the stack trace of this error execute with --v=5 or higher
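The retry lines above show minikube's retry helper re-running `kubeadm init phase addon all` while the apiserver at 192.168.72.238:8443 still refuses connections, with delays that roughly double on each attempt (401µs, 895µs, 1.6ms, … up to ~1.5s). The sketch below illustrates that kind of capped exponential backoff with jitter in plain Go; the function name, attempt budget, and cap are illustrative assumptions, not minikube's actual retry.go.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or the attempt budget is
// exhausted. The delay roughly doubles each time, with up to 50% random
// jitter added and a hard cap, which is the shape of the intervals visible
// in the log above. Names and limits here are illustrative only.
func retryWithBackoff(attempts int, initial, maxDelay time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
		time.Sleep(delay + jitter)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	// Stand-in for the kubeadm addon phase: fails while the apiserver is
	// unreachable, succeeds once it answers again.
	calls := 0
	err := retryWithBackoff(10, 500*time.Microsecond, 2*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "calls")
}
```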
	I0804 00:01:19.620129   54831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:01:19.620926   54831 addons.go:510] duration metric: took 4.041174ms for enable addons: enabled=[]
	I0804 00:01:19.762760   54831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:01:19.787361   54831 node_ready.go:35] waiting up to 6m0s for node "pause-908631" to be "Ready" ...
	I0804 00:01:19.790917   54831 node_ready.go:49] node "pause-908631" has status "Ready":"True"
	I0804 00:01:19.790938   54831 node_ready.go:38] duration metric: took 3.530309ms for node "pause-908631" to be "Ready" ...
	I0804 00:01:19.790955   54831 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:01:19.984002   54831 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-m6rv2" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.381683   54831 pod_ready.go:92] pod "coredns-7db6d8ff4d-m6rv2" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:20.381713   54831 pod_ready.go:81] duration metric: took 397.68569ms for pod "coredns-7db6d8ff4d-m6rv2" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.381726   54831 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.781791   54831 pod_ready.go:92] pod "etcd-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:20.781833   54831 pod_ready.go:81] duration metric: took 400.098825ms for pod "etcd-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:20.781847   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.182014   54831 pod_ready.go:92] pod "kube-apiserver-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:21.182045   54831 pod_ready.go:81] duration metric: took 400.189718ms for pod "kube-apiserver-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.182058   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.581817   54831 pod_ready.go:92] pod "kube-controller-manager-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:21.581847   54831 pod_ready.go:81] duration metric: took 399.780717ms for pod "kube-controller-manager-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.581860   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.982014   54831 pod_ready.go:92] pod "kube-proxy-sdch9" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:21.982038   54831 pod_ready.go:81] duration metric: took 400.170308ms for pod "kube-proxy-sdch9" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:21.982050   54831 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:22.381910   54831 pod_ready.go:92] pod "kube-scheduler-pause-908631" in "kube-system" namespace has status "Ready":"True"
	I0804 00:01:22.381938   54831 pod_ready.go:81] duration metric: took 399.879716ms for pod "kube-scheduler-pause-908631" in "kube-system" namespace to be "Ready" ...
	I0804 00:01:22.381948   54831 pod_ready.go:38] duration metric: took 2.590979677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:01:22.381967   54831 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:01:22.382027   54831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:01:22.399279   54831 api_server.go:72] duration metric: took 2.782468521s to wait for apiserver process to appear ...
	I0804 00:01:22.399304   54831 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:01:22.399326   54831 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8443/healthz ...
	I0804 00:01:22.404605   54831 api_server.go:279] https://192.168.50.32:8443/healthz returned 200:
	ok
	I0804 00:01:22.405883   54831 api_server.go:141] control plane version: v1.30.3
	I0804 00:01:22.405910   54831 api_server.go:131] duration metric: took 6.598227ms to wait for apiserver health ...
	I0804 00:01:22.405920   54831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:01:22.585013   54831 system_pods.go:59] 6 kube-system pods found
	I0804 00:01:22.585045   54831 system_pods.go:61] "coredns-7db6d8ff4d-m6rv2" [19bde3ef-d3d3-48bf-a30b-a59535e9d71d] Running
	I0804 00:01:22.585052   54831 system_pods.go:61] "etcd-pause-908631" [b3c2a959-dabc-42e5-9c77-506bb4e37cde] Running
	I0804 00:01:22.585056   54831 system_pods.go:61] "kube-apiserver-pause-908631" [e3a230d8-ca26-497c-9782-76490394e031] Running
	I0804 00:01:22.585061   54831 system_pods.go:61] "kube-controller-manager-pause-908631" [1186de38-5ec6-4963-99fe-99f76a690f54] Running
	I0804 00:01:22.585066   54831 system_pods.go:61] "kube-proxy-sdch9" [9713215c-dca4-47f8-97c1-b0fa2bf7735e] Running
	I0804 00:01:22.585075   54831 system_pods.go:61] "kube-scheduler-pause-908631" [6fd180c8-d4e9-444c-89ec-ff15a13a0cbd] Running
	I0804 00:01:22.585082   54831 system_pods.go:74] duration metric: took 179.155014ms to wait for pod list to return data ...
	I0804 00:01:22.585091   54831 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:01:22.781758   54831 default_sa.go:45] found service account: "default"
	I0804 00:01:22.781791   54831 default_sa.go:55] duration metric: took 196.690452ms for default service account to be created ...
	I0804 00:01:22.781803   54831 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:01:22.984771   54831 system_pods.go:86] 6 kube-system pods found
	I0804 00:01:22.984807   54831 system_pods.go:89] "coredns-7db6d8ff4d-m6rv2" [19bde3ef-d3d3-48bf-a30b-a59535e9d71d] Running
	I0804 00:01:22.984815   54831 system_pods.go:89] "etcd-pause-908631" [b3c2a959-dabc-42e5-9c77-506bb4e37cde] Running
	I0804 00:01:22.984822   54831 system_pods.go:89] "kube-apiserver-pause-908631" [e3a230d8-ca26-497c-9782-76490394e031] Running
	I0804 00:01:22.984834   54831 system_pods.go:89] "kube-controller-manager-pause-908631" [1186de38-5ec6-4963-99fe-99f76a690f54] Running
	I0804 00:01:22.984842   54831 system_pods.go:89] "kube-proxy-sdch9" [9713215c-dca4-47f8-97c1-b0fa2bf7735e] Running
	I0804 00:01:22.984848   54831 system_pods.go:89] "kube-scheduler-pause-908631" [6fd180c8-d4e9-444c-89ec-ff15a13a0cbd] Running
	I0804 00:01:22.984856   54831 system_pods.go:126] duration metric: took 203.046318ms to wait for k8s-apps to be running ...
	I0804 00:01:22.984869   54831 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:01:22.984921   54831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:01:23.003257   54831 system_svc.go:56] duration metric: took 18.378464ms WaitForService to wait for kubelet
	I0804 00:01:23.003290   54831 kubeadm.go:582] duration metric: took 3.386484448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:01:23.003312   54831 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:01:23.182624   54831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:01:23.182661   54831 node_conditions.go:123] node cpu capacity is 2
	I0804 00:01:23.182676   54831 node_conditions.go:105] duration metric: took 179.358194ms to run NodePressure ...
	I0804 00:01:23.182698   54831 start.go:241] waiting for startup goroutines ...
	I0804 00:01:23.182709   54831 start.go:246] waiting for cluster config update ...
	I0804 00:01:23.182723   54831 start.go:255] writing updated cluster config ...
	I0804 00:01:23.183032   54831 ssh_runner.go:195] Run: rm -f paused
	I0804 00:01:23.233711   54831 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:01:23.235900   54831 out.go:177] * Done! kubectl is now configured to use "pause-908631" cluster and "default" namespace by default
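Before the "Done!" line above, the log shows minikube polling https://192.168.50.32:8443/healthz and accepting the 200/ok response as proof the apiserver is healthy. Below is a minimal sketch of that kind of readiness probe; it skips TLS verification purely to keep the example short (the real client authenticates against the cluster's CA and client certificates), and the timeout and poll interval are assumptions.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes. InsecureSkipVerify is only for brevity;
// a real check should trust the cluster CA instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.32:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
```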
	I0804 00:01:19.573301   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:19.573806   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:19.573832   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:19.573775   55407 retry.go:31] will retry after 960.928931ms: waiting for machine to come up
	I0804 00:01:20.536251   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:20.536793   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:20.536844   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:20.536744   55407 retry.go:31] will retry after 1.815466911s: waiting for machine to come up
	I0804 00:01:22.355087   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:22.355615   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:22.355640   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:22.355560   55407 retry.go:31] will retry after 1.795705398s: waiting for machine to come up
	I0804 00:01:24.153754   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | domain stopped-upgrade-082329 has defined MAC address 52:54:00:74:3e:67 in network mk-stopped-upgrade-082329
	I0804 00:01:24.154344   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | unable to find current IP address of domain stopped-upgrade-082329 in network mk-stopped-upgrade-082329
	I0804 00:01:24.154378   55372 main.go:141] libmachine: (stopped-upgrade-082329) DBG | I0804 00:01:24.154236   55407 retry.go:31] will retry after 2.606484447s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 04 00:01:25 pause-908631 crio[2440]: time="2024-08-04 00:01:25.989245269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=889ede96-5f6f-409a-b600-be6d9a68762c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:25 pause-908631 crio[2440]: time="2024-08-04 00:01:25.989518671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=889ede96-5f6f-409a-b600-be6d9a68762c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.038978910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed2a4742-cc41-42ce-a35b-fa478b613a7c name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.039105134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed2a4742-cc41-42ce-a35b-fa478b613a7c name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.040665097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=271cc312-6b75-4d97-8c5e-6fd9e0f61e69 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.041272037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729686041242935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=271cc312-6b75-4d97-8c5e-6fd9e0f61e69 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.041928995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0dfb36e-b6bf-4515-bfef-ff66ec90a160 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.041983407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0dfb36e-b6bf-4515-bfef-ff66ec90a160 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.042267662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0dfb36e-b6bf-4515-bfef-ff66ec90a160 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.093611044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88bb2e51-bf00-4708-8c18-db8d1cda1cb6 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.093686786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88bb2e51-bf00-4708-8c18-db8d1cda1cb6 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.095320239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4aff6783-c304-42e3-a7d8-e70a5231a5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.095660896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729686095640372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4aff6783-c304-42e3-a7d8-e70a5231a5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.096398070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=360eebf5-044d-4904-8be9-677c5239cfd7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.096449535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=360eebf5-044d-4904-8be9-677c5239cfd7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.096684617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=360eebf5-044d-4904-8be9-677c5239cfd7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.144734692Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=78fdd4f5-8981-4361-a4cf-71b377ae706c name=/runtime.v1.RuntimeService/Status
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.144812796Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=78fdd4f5-8981-4361-a4cf-71b377ae706c name=/runtime.v1.RuntimeService/Status
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.151007682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ffdd780-f64f-4ebf-b71d-0ab4b57fe031 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.151142924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ffdd780-f64f-4ebf-b71d-0ab4b57fe031 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.152435383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cc21fb8-7866-48fd-a7a4-6e8a5a2e5853 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.152797800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729686152776522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cc21fb8-7866-48fd-a7a4-6e8a5a2e5853 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.153300013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01031b1a-af52-43da-ab71-8892ec3e8c7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.153352118Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01031b1a-af52-43da-ab71-8892ec3e8c7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:01:26 pause-908631 crio[2440]: time="2024-08-04 00:01:26.153630479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729666379701377,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729661599761110,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729661583788545,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-908631,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729661559841829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca
40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729661574000247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,}
,Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a,PodSandboxId:5f6b9bb54c52352985164635eb2db8c5b28f6886f2498ed3c0d15132ea0901de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729637781191990,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io
.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415,PodSandboxId:223799266f1f281b5ee9c5fd39f956e23320147f14fc6ee6b703e1177af88d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722729638344398158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m6rv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19bde3ef-d3d3-48bf-a30b-a59535e9d71d,},Annotations:map[string]string{io.kubernetes.container.hash: e042
c4f8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70,PodSandboxId:fa02d9d59730d1570dbc1920e2646768684eff1ad54a129b92ca20aa2b7dde75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729637842443388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cce1012fc67d8e7aa9fe9c3ac0e186,},Annotations:map[string]string{io.kubernetes.container.hash: 69d9fd6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9,PodSandboxId:e5fd4884e1a69767e72303785507b430443c0ccf53d9e0121a1b412000cc98ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722729637747826399,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-pause-908631,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e43b02ecebb11bd0d968246cda47b523,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47,PodSandboxId:bf6d7cca9e5b36875691eaa7b85ead0612800bff775006c0832a42869c13a14c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722729637704361742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-908631,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 48028332f8f97dca40ee799525fc5447,},Annotations:map[string]string{io.kubernetes.container.hash: 91bc3bea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11,PodSandboxId:be59c16ee9e264d30b330179e72d63b5d86c13ba1560b3dcb3241a2b568e7a4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729637597769751,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-908631,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 62c02dfac9880013304d4fe84d69a808,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c,PodSandboxId:cde87e8667a176fe9792edaa422d9fd3387bf3a18fe465d35fc59660f79b0216,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722729582932711102,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sdch9,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9713215c-dca4-47f8-97c1-b0fa2bf7735e,},Annotations:map[string]string{io.kubernetes.container.hash: e058060f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01031b1a-af52-43da-ab71-8892ec3e8c7f name=/runtime.v1.RuntimeService/ListContainers
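
The debug lines above are the tail of the CRI-O journal on the node; each request/response pair is a CRI call (Version, ImageFsInfo, ListContainers) hitting the runtime. As a rough way to follow the same stream live, assuming shell access to the pause-908631 VM, something like the following could be used:

  # Tail the CRI-O daemon journal inside the minikube VM (hypothetical invocation).
  minikube -p pause-908631 ssh -- sudo journalctl -u crio -f --no-pager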
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae91c6e7ce409       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago       Running             coredns                   2                   223799266f1f2       coredns-7db6d8ff4d-m6rv2
	a9d7219f8e908       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago       Running             kube-controller-manager   2                   be59c16ee9e26       kube-controller-manager-pause-908631
	de2a3f0590146       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   24 seconds ago       Running             kube-scheduler            2                   e5fd4884e1a69       kube-scheduler-pause-908631
	23fb83bcaadca       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago       Running             kube-apiserver            2                   fa02d9d59730d       kube-apiserver-pause-908631
	684f34a2c5626       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago       Running             etcd                      2                   bf6d7cca9e5b3       etcd-pause-908631
	e418602c155a1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   47 seconds ago       Exited              coredns                   1                   223799266f1f2       coredns-7db6d8ff4d-m6rv2
	58a4650caf704       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   48 seconds ago       Exited              kube-apiserver            1                   fa02d9d59730d       kube-apiserver-pause-908631
	ccb06696e8434       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   48 seconds ago       Running             kube-proxy                1                   5f6b9bb54c523       kube-proxy-sdch9
	a2c74ac35bbee       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   48 seconds ago       Exited              kube-scheduler            1                   e5fd4884e1a69       kube-scheduler-pause-908631
	81d0597919597       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   48 seconds ago       Exited              etcd                      1                   bf6d7cca9e5b3       etcd-pause-908631
	a7e349bedd9ef       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   48 seconds ago       Exited              kube-controller-manager   1                   be59c16ee9e26       kube-controller-manager-pause-908631
	d90eacd5648d2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   About a minute ago   Exited              kube-proxy                0                   cde87e8667a17       kube-proxy-sdch9
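
As a cross-check, a listing like the table above can be reproduced on the node itself. This is only a sketch: it assumes shell access to the pause-908631 VM (for example via minikube -p pause-908631 ssh) and that crictl talks to the CRI-O socket referenced in the node's cri-socket annotation.

  # List all containers known to CRI-O, including exited restart attempts,
  # matching the CONTAINER/STATE/ATTEMPT columns shown above.
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a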
	
	
	==> coredns [ae91c6e7ce409ccedacf420a0b97114921eae1817239f4928856d7cfd774b09c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41793 - 4350 "HINFO IN 5747257240772004115.6068251950412122109. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015200388s
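
The connection-refused lines above are CoreDNS retrying its list/watch calls against the in-cluster API VIP (10.96.0.1:443) while kube-apiserver was still restarting. A quick way to confirm the pod recovered, assuming minikube created a kubeconfig context named pause-908631 for this profile:

  # Check that the restarted CoreDNS pod reached Ready and note its restart count.
  kubectl --context pause-908631 -n kube-system get pods -l k8s-app=kube-dns -o wide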
	
	
	==> coredns [e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48830 - 14795 "HINFO IN 3627155112408540510.8888645309219256947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013593942s
	
	
	==> describe nodes <==
	Name:               pause-908631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-908631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=pause-908631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_59_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-908631
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:01:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:01:05 +0000   Sat, 03 Aug 2024 23:59:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    pause-908631
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d45c5128bc6b470c9391b7fde7d11daf
	  System UUID:                d45c5128-bc6b-470c-9391-b7fde7d11daf
	  Boot ID:                    f45357b6-996c-4eef-86c2-ddf8dc839719
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-m6rv2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-pause-908631                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-908631             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-908631    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-sdch9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-908631             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 44s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node pause-908631 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node pause-908631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node pause-908631 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node pause-908631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node pause-908631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node pause-908631 status is now: NodeHasSufficientPID
	  Normal  NodeReady                119s                 kubelet          Node pause-908631 status is now: NodeReady
	  Normal  RegisteredNode           107s                 node-controller  Node pause-908631 event: Registered Node pause-908631 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-908631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-908631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-908631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                   node-controller  Node pause-908631 event: Registered Node pause-908631 in Controller
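
The duplicated Starting/NodeHasSufficient* events above reflect the two kubelet restarts during this test. A minimal sketch for re-querying just the node's Ready condition, again assuming the pause-908631 kubeconfig context exists:

  # Print only the Ready condition status for the node (expected output: True).
  kubectl --context pause-908631 get node pause-908631 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'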
	
	
	==> dmesg <==
	[  +9.491676] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.076411] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078187] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.211012] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.152066] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.316503] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.671192] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.060098] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.824086] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +1.215545] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.349179] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.080272] kauditd_printk_skb: 10 callbacks suppressed
	[ +14.811804] systemd-fstab-generator[1482]: Ignoring "noauto" option for root device
	[  +0.082825] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.560452] kauditd_printk_skb: 88 callbacks suppressed
	[Aug 4 00:00] systemd-fstab-generator[2358]: Ignoring "noauto" option for root device
	[  +0.189663] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[  +0.205539] systemd-fstab-generator[2384]: Ignoring "noauto" option for root device
	[  +0.161563] systemd-fstab-generator[2396]: Ignoring "noauto" option for root device
	[  +0.372100] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +0.859663] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[  +5.310577] kauditd_printk_skb: 195 callbacks suppressed
	[Aug 4 00:01] systemd-fstab-generator[3387]: Ignoring "noauto" option for root device
	[  +5.610384] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.219837] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	
	
	==> etcd [684f34a2c562666264f82283e92f93d0deef0c8df4412620054c0d4ba8934e84] <==
	{"level":"info","ts":"2024-08-04T00:01:02.273395Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:01:02.273438Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:01:02.273785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec switched to configuration voters=(18146372362501279212)"}
	{"level":"info","ts":"2024-08-04T00:01:02.273912Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","added-peer-id":"fbd4dd8524dacdec","added-peer-peer-urls":["https://192.168.50.32:2380"]}
	{"level":"info","ts":"2024-08-04T00:01:02.27416Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:01:02.274245Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:01:02.290371Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:01:02.290989Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fbd4dd8524dacdec","initial-advertise-peer-urls":["https://192.168.50.32:2380"],"listen-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:01:02.290717Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:01:02.293163Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:01:02.291998Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:01:03.786715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-04T00:01:03.786772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:01:03.786799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-08-04T00:01:03.786818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.786823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.786831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.786838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 4"}
	{"level":"info","ts":"2024-08-04T00:01:03.792572Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-908631 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:01:03.792679Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:01:03.793212Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:01:03.796239Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:01:03.796275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:01:03.798132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:01:03.799804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	
	
	==> etcd [81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47] <==
	{"level":"info","ts":"2024-08-04T00:00:40.613849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:00:40.613932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-08-04T00:00:40.613967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.613993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.61402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.61415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 3"}
	{"level":"info","ts":"2024-08-04T00:00:40.619238Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:pause-908631 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:00:40.619248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:00:40.619733Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:00:40.619772Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:00:40.619301Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:00:40.621903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	{"level":"info","ts":"2024-08-04T00:00:40.622814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-04T00:00:56.94253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.609975ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14838394416089741953 > lease_revoke:<id:4dec911aaefdacec>","response":"size:28"}
	{"level":"warn","ts":"2024-08-04T00:00:57.086976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.348799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14838394416089741954 > lease_revoke:<id:4dec911aaefdaca3>","response":"size:28"}
	{"level":"info","ts":"2024-08-04T00:00:59.213088Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T00:00:59.213135Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-908631","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	{"level":"warn","ts":"2024-08-04T00:00:59.213221Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:00:59.21326Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:00:59.21499Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:00:59.215016Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.32:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T00:00:59.215104Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fbd4dd8524dacdec","current-leader-member-id":"fbd4dd8524dacdec"}
	{"level":"info","ts":"2024-08-04T00:00:59.218594Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:00:59.218772Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-08-04T00:00:59.2188Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-908631","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"]}
	
	
	==> kernel <==
	 00:01:26 up 2 min,  0 users,  load average: 1.06, 0.54, 0.21
	Linux pause-908631 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [23fb83bcaadca15284a1a25074d9f2405c81ff66b46b4aa95280ea46bdffc36e] <==
	I0804 00:01:05.401254       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:01:05.412130       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0804 00:01:05.475928       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:01:05.476136       1 policy_source.go:224] refreshing policies
	I0804 00:01:05.476096       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:01:05.487306       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:01:05.487396       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:01:05.487532       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:01:05.487595       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:01:05.487734       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:01:05.488113       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:01:05.488141       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:01:05.488147       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:01:05.488152       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:01:05.487347       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:01:05.519706       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:01:06.289404       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0804 00:01:06.637557       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.32]
	I0804 00:01:06.638953       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:01:06.644583       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 00:01:06.894185       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:01:06.906115       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:01:06.952847       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:01:06.989301       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:01:06.997843       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70] <==
	I0804 00:00:49.007433       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0804 00:00:49.007493       1 establishing_controller.go:87] Shutting down EstablishingController
	I0804 00:00:49.007520       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0804 00:00:49.007530       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0804 00:00:49.007595       1 naming_controller.go:302] Shutting down NamingConditionController
	I0804 00:00:49.007621       1 controller.go:167] Shutting down OpenAPI controller
	I0804 00:00:49.008814       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0804 00:00:49.008849       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0804 00:00:49.009822       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 00:00:49.009944       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 00:00:49.010029       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0804 00:00:49.010099       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0804 00:00:49.010112       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0804 00:00:49.010141       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0804 00:00:49.010223       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 00:00:49.010242       1 controller.go:157] Shutting down quota evaluator
	I0804 00:00:49.010270       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.012999       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013134       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013161       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013218       1 controller.go:176] quota evaluator worker shutdown
	I0804 00:00:49.013340       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0804 00:00:49.013367       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 00:00:49.014144       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0804 00:00:49.013358       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-controller-manager [a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11] <==
	I0804 00:00:44.041746       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0804 00:00:44.044216       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0804 00:00:44.044540       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0804 00:00:44.044752       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0804 00:00:44.047939       1 controllermanager.go:761] "Started controller" controller="token-cleaner-controller"
	I0804 00:00:44.048020       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0804 00:00:44.048090       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0804 00:00:44.048115       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0804 00:00:44.050255       1 controllermanager.go:761] "Started controller" controller="persistentvolume-binder-controller"
	I0804 00:00:44.051136       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0804 00:00:44.051218       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0804 00:00:44.057947       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0804 00:00:44.059705       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0804 00:00:44.059736       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0804 00:00:44.076735       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0804 00:00:44.077009       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0804 00:00:44.081510       1 controllermanager.go:761] "Started controller" controller="endpointslice-mirroring-controller"
	I0804 00:00:44.081661       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0804 00:00:44.081695       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0804 00:00:44.100249       1 shared_informer.go:320] Caches are synced for tokens
	W0804 00:00:54.085806       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	W0804 00:00:54.587597       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	W0804 00:00:55.588441       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	W0804 00:00:57.589594       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.32:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.32:8443: connect: connection refused
	E0804 00:00:57.589773       1 cidr_allocator.go:146] "Failed to list all nodes" err="Get \"https://192.168.50.32:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-ipam-controller"
	
	
	==> kube-controller-manager [a9d7219f8e908cf502a986af6fc0db19e799b8fc9401003a28ff961ea179dda6] <==
	I0804 00:01:18.437448       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 00:01:18.440709       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 00:01:18.441955       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0804 00:01:18.457222       1 shared_informer.go:320] Caches are synced for deployment
	I0804 00:01:18.468332       1 shared_informer.go:320] Caches are synced for HPA
	I0804 00:01:18.470851       1 shared_informer.go:320] Caches are synced for stateful set
	I0804 00:01:18.473768       1 shared_informer.go:320] Caches are synced for endpoint
	I0804 00:01:18.473931       1 shared_informer.go:320] Caches are synced for job
	I0804 00:01:18.473947       1 shared_informer.go:320] Caches are synced for ephemeral
	I0804 00:01:18.473961       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0804 00:01:18.474475       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:01:18.479627       1 shared_informer.go:320] Caches are synced for taint
	I0804 00:01:18.480407       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0804 00:01:18.481105       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-908631"
	I0804 00:01:18.482666       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0804 00:01:18.490770       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0804 00:01:18.493472       1 shared_informer.go:320] Caches are synced for GC
	I0804 00:01:18.496006       1 shared_informer.go:320] Caches are synced for daemon sets
	I0804 00:01:18.497262       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 00:01:18.497455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="90.117µs"
	I0804 00:01:18.511160       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:01:18.516302       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:01:18.943850       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:01:18.952448       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:01:18.952562       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ccb06696e8434cfb5575a4f3728363b0e78c8a60e26c1b4adb3e74221d69764a] <==
	I0804 00:00:39.902496       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:00:42.058504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0804 00:00:42.110741       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:00:42.110842       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:00:42.110876       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:00:42.113523       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:00:42.113940       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:00:42.114279       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:42.115638       1 config.go:192] "Starting service config controller"
	I0804 00:00:42.115923       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:00:42.116084       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:00:42.116138       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:00:42.118421       1 config.go:319] "Starting node config controller"
	I0804 00:00:42.118462       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:00:42.216305       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:00:42.216404       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:00:42.220764       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d90eacd5648d22505b09200702258fde7b9f17dadd6d88982904d6f814f7db7c] <==
	I0803 23:59:43.623538       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:59:44.047690       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	I0803 23:59:44.181984       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:59:44.182115       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:59:44.182146       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:59:44.187151       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:59:44.187502       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:59:44.187562       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:59:44.191955       1 config.go:192] "Starting service config controller"
	I0803 23:59:44.192404       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:59:44.192495       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:59:44.192534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:59:44.196120       1 config.go:319] "Starting node config controller"
	I0803 23:59:44.196230       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:59:44.293603       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:59:44.293729       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:59:44.296407       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9] <==
	I0804 00:00:39.639723       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:00:41.998461       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:00:41.998603       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:00:41.998633       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:00:41.998708       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:00:42.055779       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:00:42.056272       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:42.060666       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:00:42.060762       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:00:42.061224       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:00:42.061407       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:00:42.164253       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 00:00:59.063837       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [de2a3f0590146cbe0e9206a192e8087792a2eb05d17d6c2d1211a54c5c3dbc08] <==
	I0804 00:01:03.212820       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:01:05.387756       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:01:05.387847       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:01:05.387857       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:01:05.387863       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:01:05.425323       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:01:05.425403       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:01:05.426961       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:01:05.427146       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:01:05.427185       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:01:05.427220       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:01:05.528175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307635    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e43b02ecebb11bd0d968246cda47b523-kubeconfig\") pod \"kube-scheduler-pause-908631\" (UID: \"e43b02ecebb11bd0d968246cda47b523\") " pod="kube-system/kube-scheduler-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307655    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/48028332f8f97dca40ee799525fc5447-etcd-certs\") pod \"etcd-pause-908631\" (UID: \"48028332f8f97dca40ee799525fc5447\") " pod="kube-system/etcd-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307671    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3cce1012fc67d8e7aa9fe9c3ac0e186-ca-certs\") pod \"kube-apiserver-pause-908631\" (UID: \"c3cce1012fc67d8e7aa9fe9c3ac0e186\") " pod="kube-system/kube-apiserver-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.307685    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62c02dfac9880013304d4fe84d69a808-ca-certs\") pod \"kube-controller-manager-pause-908631\" (UID: \"62c02dfac9880013304d4fe84d69a808\") " pod="kube-system/kube-controller-manager-pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.371834    3394 kubelet_node_status.go:73] "Attempting to register node" node="pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: E0804 00:01:01.372804    3394 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.532320    3394 scope.go:117] "RemoveContainer" containerID="81d0597919597e001422342d2504ce8fbf034510199d7fc3fa76ab9f4477ff47"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.533430    3394 scope.go:117] "RemoveContainer" containerID="58a4650caf7043b60b2341b55f231d4e07eb9c3c6ffd0d019f6cfd094b310a70"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.534733    3394 scope.go:117] "RemoveContainer" containerID="a2c74ac35bbeecee7a2b95b2c5ee8c6233d9b6c2bfbc3bdd51ef2bb98fb5a0d9"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.535794    3394 scope.go:117] "RemoveContainer" containerID="a7e349bedd9ef995c544f2534edad5c5e71c47952f4fe9612a8d1842e35a4d11"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: E0804 00:01:01.675016    3394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-908631?timeout=10s\": dial tcp 192.168.50.32:8443: connect: connection refused" interval="800ms"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: I0804 00:01:01.775224    3394 kubelet_node_status.go:73] "Attempting to register node" node="pause-908631"
	Aug 04 00:01:01 pause-908631 kubelet[3394]: E0804 00:01:01.777182    3394 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.32:8443: connect: connection refused" node="pause-908631"
	Aug 04 00:01:02 pause-908631 kubelet[3394]: I0804 00:01:02.579988    3394 kubelet_node_status.go:73] "Attempting to register node" node="pause-908631"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.552380    3394 kubelet_node_status.go:112] "Node was previously registered" node="pause-908631"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.552479    3394 kubelet_node_status.go:76] "Successfully registered node" node="pause-908631"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.554115    3394 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 00:01:05 pause-908631 kubelet[3394]: I0804 00:01:05.555000    3394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.055738    3394 apiserver.go:52] "Watching apiserver"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.059486    3394 topology_manager.go:215] "Topology Admit Handler" podUID="9713215c-dca4-47f8-97c1-b0fa2bf7735e" podNamespace="kube-system" podName="kube-proxy-sdch9"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.059639    3394 topology_manager.go:215] "Topology Admit Handler" podUID="19bde3ef-d3d3-48bf-a30b-a59535e9d71d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m6rv2"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.082863    3394 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.084199    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9713215c-dca4-47f8-97c1-b0fa2bf7735e-xtables-lock\") pod \"kube-proxy-sdch9\" (UID: \"9713215c-dca4-47f8-97c1-b0fa2bf7735e\") " pod="kube-system/kube-proxy-sdch9"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.084260    3394 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9713215c-dca4-47f8-97c1-b0fa2bf7735e-lib-modules\") pod \"kube-proxy-sdch9\" (UID: \"9713215c-dca4-47f8-97c1-b0fa2bf7735e\") " pod="kube-system/kube-proxy-sdch9"
	Aug 04 00:01:06 pause-908631 kubelet[3394]: I0804 00:01:06.361159    3394 scope.go:117] "RemoveContainer" containerID="e418602c155a1b0cc2d03e45ba1102f736cad12fd5528d64e31704f1cb0bb415"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-908631 -n pause-908631
helpers_test.go:261: (dbg) Run:  kubectl --context pause-908631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (63.84s)
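Note: the post-mortem logs above show the pause-908631 control plane coming back up cleanly on the second start — the single-member etcd re-elected itself leader at term 3 (00:00:40), the replacement kube-apiserver finished its cache syncs at 00:01:05, and the kubelet re-registered the node at 00:01:05 — so the failure looks like a timing/reconfiguration assertion in the test rather than a crashed component. A minimal sketch for re-running just this test locally, assuming the standard minikube source-tree layout (test/integration) and an already-built out/minikube-linux-amd64; the CI job may pass additional harness flags not shown here:

	# hypothetical local re-run of only the failing test (path and timeout are assumptions)
	go test ./test/integration -run 'TestPause/serial/SecondStartNoReconfiguration' -timeout 30m -v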

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (288.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m48.383892962s)
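Note: exit status 109 together with the stdout below, where "Generating certificates and keys ..." and "Booting up control plane ..." each appear twice, suggests the v1.20.0 control plane never became healthy on CRI-O 1.29.1 and the bootstrap was retried before minikube gave up. A minimal reproduction sketch outside the test harness, reusing only the flags already present in the command above:

	# hypothetical manual reproduction using the same flags the test invocation used
	out/minikube-linux-amd64 start -p old-k8s-version-576210 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --kvm-network=default \
	  --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --wait=true --alsologtostderr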

                                                
                                                
-- stdout --
	* [old-k8s-version-576210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-576210" primary control-plane node in "old-k8s-version-576210" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:03:44.521707   60214 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:03:44.521866   60214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:03:44.521896   60214 out.go:304] Setting ErrFile to fd 2...
	I0804 00:03:44.521907   60214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:03:44.522102   60214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:03:44.522688   60214 out.go:298] Setting JSON to false
	I0804 00:03:44.523727   60214 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6368,"bootTime":1722723456,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:03:44.523813   60214 start.go:139] virtualization: kvm guest
	I0804 00:03:44.526263   60214 out.go:177] * [old-k8s-version-576210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:03:44.527674   60214 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:03:44.527734   60214 notify.go:220] Checking for updates...
	I0804 00:03:44.530573   60214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:03:44.532093   60214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:03:44.533568   60214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:03:44.535197   60214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:03:44.536703   60214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:03:44.538802   60214 config.go:182] Loaded profile config "NoKubernetes-551054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:03:44.538929   60214 config.go:182] Loaded profile config "cert-expiration-705918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:03:44.539055   60214 config.go:182] Loaded profile config "kubernetes-upgrade-302198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:03:44.539190   60214 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:03:44.579300   60214 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:03:44.580704   60214 start.go:297] selected driver: kvm2
	I0804 00:03:44.580724   60214 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:03:44.580739   60214 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:03:44.581623   60214 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:03:44.581704   60214 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:03:44.597469   60214 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:03:44.597532   60214 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:03:44.597803   60214 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:03:44.597841   60214 cni.go:84] Creating CNI manager for ""
	I0804 00:03:44.597857   60214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:03:44.597867   60214 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:03:44.597957   60214 start.go:340] cluster config:
	{Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:03:44.598084   60214 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:03:44.599972   60214 out.go:177] * Starting "old-k8s-version-576210" primary control-plane node in "old-k8s-version-576210" cluster
	I0804 00:03:44.601607   60214 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:03:44.601668   60214 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0804 00:03:44.601681   60214 cache.go:56] Caching tarball of preloaded images
	I0804 00:03:44.601774   60214 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:03:44.601787   60214 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0804 00:03:44.601910   60214 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:03:44.601935   60214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json: {Name:mkd2a527d8ed1bf5fb5e6186b932d6f9d027038b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:03:44.602093   60214 start.go:360] acquireMachinesLock for old-k8s-version-576210: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:04:03.526434   60214 start.go:364] duration metric: took 18.924309812s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:04:03.526514   60214 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:04:03.526634   60214 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:04:03.529676   60214 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 00:04:03.529878   60214 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:04:03.529933   60214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:04:03.547206   60214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40427
	I0804 00:04:03.547648   60214 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:04:03.548311   60214 main.go:141] libmachine: Using API Version  1
	I0804 00:04:03.548337   60214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:04:03.548748   60214 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:04:03.548953   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:04:03.549144   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:03.549318   60214 start.go:159] libmachine.API.Create for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:04:03.549370   60214 client.go:168] LocalClient.Create starting
	I0804 00:04:03.549411   60214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0804 00:04:03.549451   60214 main.go:141] libmachine: Decoding PEM data...
	I0804 00:04:03.549484   60214 main.go:141] libmachine: Parsing certificate...
	I0804 00:04:03.549573   60214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0804 00:04:03.549599   60214 main.go:141] libmachine: Decoding PEM data...
	I0804 00:04:03.549608   60214 main.go:141] libmachine: Parsing certificate...
	I0804 00:04:03.549623   60214 main.go:141] libmachine: Running pre-create checks...
	I0804 00:04:03.549630   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .PreCreateCheck
	I0804 00:04:03.549982   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:04:03.550451   60214 main.go:141] libmachine: Creating machine...
	I0804 00:04:03.550464   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .Create
	I0804 00:04:03.550581   60214 main.go:141] libmachine: (old-k8s-version-576210) Creating KVM machine...
	I0804 00:04:03.552014   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found existing default KVM network
	I0804 00:04:03.553608   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:03.553415   60413 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:54:16} reservation:<nil>}
	I0804 00:04:03.554641   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:03.554546   60413 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:6c:d1} reservation:<nil>}
	I0804 00:04:03.555597   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:03.555506   60413 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:76:b7:1a} reservation:<nil>}
	I0804 00:04:03.556975   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:03.556881   60413 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289920}
	I0804 00:04:03.557024   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | created network xml: 
	I0804 00:04:03.557046   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | <network>
	I0804 00:04:03.557057   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |   <name>mk-old-k8s-version-576210</name>
	I0804 00:04:03.557068   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |   <dns enable='no'/>
	I0804 00:04:03.557076   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |   
	I0804 00:04:03.557088   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0804 00:04:03.557129   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |     <dhcp>
	I0804 00:04:03.557145   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0804 00:04:03.557158   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |     </dhcp>
	I0804 00:04:03.557170   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |   </ip>
	I0804 00:04:03.557181   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG |   
	I0804 00:04:03.557188   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | </network>
	I0804 00:04:03.557206   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | 
	I0804 00:04:03.563141   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | trying to create private KVM network mk-old-k8s-version-576210 192.168.72.0/24...
	I0804 00:04:03.639421   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | private KVM network mk-old-k8s-version-576210 192.168.72.0/24 created
	I0804 00:04:03.639456   60214 main.go:141] libmachine: (old-k8s-version-576210) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210 ...
	I0804 00:04:03.639481   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:03.639357   60413 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:04:03.639515   60214 main.go:141] libmachine: (old-k8s-version-576210) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:04:03.639557   60214 main.go:141] libmachine: (old-k8s-version-576210) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:04:03.894549   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:03.894422   60413 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa...
	I0804 00:04:04.196974   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:04.196823   60413 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/old-k8s-version-576210.rawdisk...
	I0804 00:04:04.197004   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Writing magic tar header
	I0804 00:04:04.197021   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Writing SSH key tar header
	I0804 00:04:04.197098   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:04.197019   60413 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210 ...
	I0804 00:04:04.197193   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210
	I0804 00:04:04.197211   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0804 00:04:04.197229   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:04:04.197242   60214 main.go:141] libmachine: (old-k8s-version-576210) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210 (perms=drwx------)
	I0804 00:04:04.197259   60214 main.go:141] libmachine: (old-k8s-version-576210) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:04:04.197278   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0804 00:04:04.197297   60214 main.go:141] libmachine: (old-k8s-version-576210) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0804 00:04:04.197310   60214 main.go:141] libmachine: (old-k8s-version-576210) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0804 00:04:04.197319   60214 main.go:141] libmachine: (old-k8s-version-576210) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:04:04.197334   60214 main.go:141] libmachine: (old-k8s-version-576210) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:04:04.197342   60214 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:04:04.197421   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:04:04.197453   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:04:04.197467   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Checking permissions on dir: /home
	I0804 00:04:04.197479   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Skipping /home - not owner
	I0804 00:04:04.198892   60214 main.go:141] libmachine: (old-k8s-version-576210) define libvirt domain using xml: 
	I0804 00:04:04.198915   60214 main.go:141] libmachine: (old-k8s-version-576210) <domain type='kvm'>
	I0804 00:04:04.198925   60214 main.go:141] libmachine: (old-k8s-version-576210)   <name>old-k8s-version-576210</name>
	I0804 00:04:04.198933   60214 main.go:141] libmachine: (old-k8s-version-576210)   <memory unit='MiB'>2200</memory>
	I0804 00:04:04.198942   60214 main.go:141] libmachine: (old-k8s-version-576210)   <vcpu>2</vcpu>
	I0804 00:04:04.198953   60214 main.go:141] libmachine: (old-k8s-version-576210)   <features>
	I0804 00:04:04.198965   60214 main.go:141] libmachine: (old-k8s-version-576210)     <acpi/>
	I0804 00:04:04.198976   60214 main.go:141] libmachine: (old-k8s-version-576210)     <apic/>
	I0804 00:04:04.198998   60214 main.go:141] libmachine: (old-k8s-version-576210)     <pae/>
	I0804 00:04:04.199008   60214 main.go:141] libmachine: (old-k8s-version-576210)     
	I0804 00:04:04.199017   60214 main.go:141] libmachine: (old-k8s-version-576210)   </features>
	I0804 00:04:04.199028   60214 main.go:141] libmachine: (old-k8s-version-576210)   <cpu mode='host-passthrough'>
	I0804 00:04:04.199046   60214 main.go:141] libmachine: (old-k8s-version-576210)   
	I0804 00:04:04.199057   60214 main.go:141] libmachine: (old-k8s-version-576210)   </cpu>
	I0804 00:04:04.199068   60214 main.go:141] libmachine: (old-k8s-version-576210)   <os>
	I0804 00:04:04.199079   60214 main.go:141] libmachine: (old-k8s-version-576210)     <type>hvm</type>
	I0804 00:04:04.199089   60214 main.go:141] libmachine: (old-k8s-version-576210)     <boot dev='cdrom'/>
	I0804 00:04:04.199097   60214 main.go:141] libmachine: (old-k8s-version-576210)     <boot dev='hd'/>
	I0804 00:04:04.199105   60214 main.go:141] libmachine: (old-k8s-version-576210)     <bootmenu enable='no'/>
	I0804 00:04:04.199112   60214 main.go:141] libmachine: (old-k8s-version-576210)   </os>
	I0804 00:04:04.199126   60214 main.go:141] libmachine: (old-k8s-version-576210)   <devices>
	I0804 00:04:04.199143   60214 main.go:141] libmachine: (old-k8s-version-576210)     <disk type='file' device='cdrom'>
	I0804 00:04:04.199160   60214 main.go:141] libmachine: (old-k8s-version-576210)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/boot2docker.iso'/>
	I0804 00:04:04.199171   60214 main.go:141] libmachine: (old-k8s-version-576210)       <target dev='hdc' bus='scsi'/>
	I0804 00:04:04.199180   60214 main.go:141] libmachine: (old-k8s-version-576210)       <readonly/>
	I0804 00:04:04.199190   60214 main.go:141] libmachine: (old-k8s-version-576210)     </disk>
	I0804 00:04:04.199200   60214 main.go:141] libmachine: (old-k8s-version-576210)     <disk type='file' device='disk'>
	I0804 00:04:04.199212   60214 main.go:141] libmachine: (old-k8s-version-576210)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:04:04.199229   60214 main.go:141] libmachine: (old-k8s-version-576210)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/old-k8s-version-576210.rawdisk'/>
	I0804 00:04:04.199241   60214 main.go:141] libmachine: (old-k8s-version-576210)       <target dev='hda' bus='virtio'/>
	I0804 00:04:04.199252   60214 main.go:141] libmachine: (old-k8s-version-576210)     </disk>
	I0804 00:04:04.199263   60214 main.go:141] libmachine: (old-k8s-version-576210)     <interface type='network'>
	I0804 00:04:04.199273   60214 main.go:141] libmachine: (old-k8s-version-576210)       <source network='mk-old-k8s-version-576210'/>
	I0804 00:04:04.199284   60214 main.go:141] libmachine: (old-k8s-version-576210)       <model type='virtio'/>
	I0804 00:04:04.199295   60214 main.go:141] libmachine: (old-k8s-version-576210)     </interface>
	I0804 00:04:04.199307   60214 main.go:141] libmachine: (old-k8s-version-576210)     <interface type='network'>
	I0804 00:04:04.199319   60214 main.go:141] libmachine: (old-k8s-version-576210)       <source network='default'/>
	I0804 00:04:04.199329   60214 main.go:141] libmachine: (old-k8s-version-576210)       <model type='virtio'/>
	I0804 00:04:04.199351   60214 main.go:141] libmachine: (old-k8s-version-576210)     </interface>
	I0804 00:04:04.199362   60214 main.go:141] libmachine: (old-k8s-version-576210)     <serial type='pty'>
	I0804 00:04:04.199370   60214 main.go:141] libmachine: (old-k8s-version-576210)       <target port='0'/>
	I0804 00:04:04.199380   60214 main.go:141] libmachine: (old-k8s-version-576210)     </serial>
	I0804 00:04:04.199388   60214 main.go:141] libmachine: (old-k8s-version-576210)     <console type='pty'>
	I0804 00:04:04.199400   60214 main.go:141] libmachine: (old-k8s-version-576210)       <target type='serial' port='0'/>
	I0804 00:04:04.199411   60214 main.go:141] libmachine: (old-k8s-version-576210)     </console>
	I0804 00:04:04.199421   60214 main.go:141] libmachine: (old-k8s-version-576210)     <rng model='virtio'>
	I0804 00:04:04.199470   60214 main.go:141] libmachine: (old-k8s-version-576210)       <backend model='random'>/dev/random</backend>
	I0804 00:04:04.199514   60214 main.go:141] libmachine: (old-k8s-version-576210)     </rng>
	I0804 00:04:04.199529   60214 main.go:141] libmachine: (old-k8s-version-576210)     
	I0804 00:04:04.199541   60214 main.go:141] libmachine: (old-k8s-version-576210)     
	I0804 00:04:04.199561   60214 main.go:141] libmachine: (old-k8s-version-576210)   </devices>
	I0804 00:04:04.199573   60214 main.go:141] libmachine: (old-k8s-version-576210) </domain>
	I0804 00:04:04.199603   60214 main.go:141] libmachine: (old-k8s-version-576210) 
	I0804 00:04:04.204569   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:97:ce:41 in network default
	I0804 00:04:04.205184   60214 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:04:04.205211   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:04.205994   60214 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:04:04.206517   60214 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:04:04.207045   60214 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:04:04.207876   60214 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:04:05.454388   60214 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:04:05.455306   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:05.455796   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:05.455825   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:05.455772   60413 retry.go:31] will retry after 278.263794ms: waiting for machine to come up
	I0804 00:04:05.736229   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:05.736774   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:05.736818   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:05.736721   60413 retry.go:31] will retry after 336.297357ms: waiting for machine to come up
	I0804 00:04:06.074235   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:06.074776   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:06.074806   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:06.074753   60413 retry.go:31] will retry after 381.688779ms: waiting for machine to come up
	I0804 00:04:06.458602   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:06.459264   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:06.459286   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:06.459210   60413 retry.go:31] will retry after 403.663955ms: waiting for machine to come up
	I0804 00:04:06.864841   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:06.865318   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:06.865346   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:06.865277   60413 retry.go:31] will retry after 511.267894ms: waiting for machine to come up
	I0804 00:04:07.378466   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:07.378812   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:07.378839   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:07.378790   60413 retry.go:31] will retry after 584.99314ms: waiting for machine to come up
	I0804 00:04:07.965535   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:07.965958   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:07.965987   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:07.965911   60413 retry.go:31] will retry after 798.403734ms: waiting for machine to come up
	I0804 00:04:08.766178   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:08.766701   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:08.766734   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:08.766626   60413 retry.go:31] will retry after 956.831111ms: waiting for machine to come up
	I0804 00:04:09.724879   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:09.725519   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:09.725542   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:09.725466   60413 retry.go:31] will retry after 1.2769079s: waiting for machine to come up
	I0804 00:04:11.003824   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:11.004298   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:11.004321   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:11.004242   60413 retry.go:31] will retry after 2.039459663s: waiting for machine to come up
	I0804 00:04:13.046261   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:13.046725   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:13.046746   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:13.046681   60413 retry.go:31] will retry after 2.573093518s: waiting for machine to come up
	I0804 00:04:15.622546   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:15.623058   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:15.623084   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:15.623011   60413 retry.go:31] will retry after 3.220837529s: waiting for machine to come up
	I0804 00:04:18.845240   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:18.845775   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:18.845800   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:18.845731   60413 retry.go:31] will retry after 3.583638755s: waiting for machine to come up
	I0804 00:04:22.430413   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:22.430842   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:04:22.430863   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:04:22.430793   60413 retry.go:31] will retry after 3.774524876s: waiting for machine to come up
	I0804 00:04:26.208866   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.209331   60214 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:04:26.209368   60214 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:04:26.209385   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.209755   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210
	I0804 00:04:26.287442   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:04:26.287480   60214 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:04:26.287527   60214 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:04:26.290113   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.290583   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:26.290614   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.290781   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:04:26.290810   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:04:26.290855   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:04:26.290872   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:04:26.290885   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:04:26.417295   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
	I0804 00:04:26.417605   60214 main.go:141] libmachine: (old-k8s-version-576210) KVM machine creation complete!
	I0804 00:04:26.417934   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:04:26.418522   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:26.418736   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:26.418911   60214 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:04:26.418928   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:04:26.420304   60214 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:04:26.420321   60214 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:04:26.420326   60214 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:04:26.420332   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:26.422601   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.422966   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:26.422994   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.423150   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:26.423297   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.423441   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.423569   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:26.423740   60214 main.go:141] libmachine: Using SSH client type: native
	I0804 00:04:26.423974   60214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:04:26.423991   60214 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:04:26.532782   60214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:04:26.532811   60214 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:04:26.532822   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:26.535654   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.535990   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:26.536014   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.536132   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:26.536331   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.536479   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.536619   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:26.536807   60214 main.go:141] libmachine: Using SSH client type: native
	I0804 00:04:26.536968   60214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:04:26.536979   60214 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:04:26.646120   60214 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:04:26.646214   60214 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:04:26.646229   60214 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:04:26.646240   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:04:26.646475   60214 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:04:26.646500   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:04:26.646676   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:26.649093   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.649430   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:26.649468   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.649606   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:26.649919   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.650180   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.650347   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:26.650542   60214 main.go:141] libmachine: Using SSH client type: native
	I0804 00:04:26.650719   60214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:04:26.650734   60214 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:04:26.776658   60214 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:04:26.776684   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:26.779164   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.779478   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:26.779507   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.779661   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:26.779859   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.780025   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:26.780185   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:26.780345   60214 main.go:141] libmachine: Using SSH client type: native
	I0804 00:04:26.780511   60214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:04:26.780527   60214 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:04:26.898556   60214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:04:26.898586   60214 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:04:26.898641   60214 buildroot.go:174] setting up certificates
	I0804 00:04:26.898652   60214 provision.go:84] configureAuth start
	I0804 00:04:26.898663   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:04:26.898934   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:04:26.901576   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.901914   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:26.901946   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.902075   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:26.904239   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.904561   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:26.904590   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:26.904692   60214 provision.go:143] copyHostCerts
	I0804 00:04:26.904754   60214 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:04:26.904770   60214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:04:26.904837   60214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:04:26.904962   60214 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:04:26.904974   60214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:04:26.905005   60214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:04:26.905092   60214 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:04:26.905101   60214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:04:26.905129   60214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:04:26.905220   60214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
	I0804 00:04:27.229614   60214 provision.go:177] copyRemoteCerts
	I0804 00:04:27.229671   60214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:04:27.229693   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:27.232189   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.232522   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.232554   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.232777   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:27.232970   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.233136   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:27.233266   60214 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:04:27.319872   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:04:27.345863   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:04:27.371644   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:04:27.396849   60214 provision.go:87] duration metric: took 498.186217ms to configureAuth
	I0804 00:04:27.396876   60214 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:04:27.397047   60214 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:04:27.397112   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:27.399570   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.399900   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.399932   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.400148   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:27.400351   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.400525   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.400672   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:27.400814   60214 main.go:141] libmachine: Using SSH client type: native
	I0804 00:04:27.401015   60214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:04:27.401031   60214 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:04:27.671546   60214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:04:27.671578   60214 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:04:27.671590   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetURL
	I0804 00:04:27.672819   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using libvirt version 6000000
	I0804 00:04:27.675228   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.675594   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.675645   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.675823   60214 main.go:141] libmachine: Docker is up and running!
	I0804 00:04:27.675848   60214 main.go:141] libmachine: Reticulating splines...
	I0804 00:04:27.675857   60214 client.go:171] duration metric: took 24.126474603s to LocalClient.Create
	I0804 00:04:27.675894   60214 start.go:167] duration metric: took 24.12657668s to libmachine.API.Create "old-k8s-version-576210"
	I0804 00:04:27.675908   60214 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:04:27.675923   60214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:04:27.675946   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:27.676205   60214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:04:27.676229   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:27.678460   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.678776   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.678804   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.678987   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:27.679151   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.679307   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:27.679455   60214 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:04:27.768481   60214 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:04:27.772908   60214 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:04:27.772930   60214 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:04:27.772991   60214 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:04:27.773059   60214 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:04:27.773149   60214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:04:27.783016   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:04:27.808719   60214 start.go:296] duration metric: took 132.79644ms for postStartSetup
	I0804 00:04:27.808771   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:04:27.809296   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:04:27.811765   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.812111   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.812140   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.812439   60214 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:04:27.812655   60214 start.go:128] duration metric: took 24.286009261s to createHost
	I0804 00:04:27.812679   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:27.814819   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.815114   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.815134   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.815314   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:27.815496   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.815658   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.815776   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:27.815957   60214 main.go:141] libmachine: Using SSH client type: native
	I0804 00:04:27.816109   60214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:04:27.816127   60214 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:04:27.930021   60214 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729867.904919295
	
	I0804 00:04:27.930045   60214 fix.go:216] guest clock: 1722729867.904919295
	I0804 00:04:27.930055   60214 fix.go:229] Guest: 2024-08-04 00:04:27.904919295 +0000 UTC Remote: 2024-08-04 00:04:27.81266844 +0000 UTC m=+43.329595818 (delta=92.250855ms)
	I0804 00:04:27.930104   60214 fix.go:200] guest clock delta is within tolerance: 92.250855ms
	I0804 00:04:27.930115   60214 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 24.403632414s
	I0804 00:04:27.930144   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:27.930398   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:04:27.933232   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.933613   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.933639   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.933791   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:27.934376   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:27.934538   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:04:27.934614   60214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:04:27.934661   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:27.934736   60214 ssh_runner.go:195] Run: cat /version.json
	I0804 00:04:27.934756   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:04:27.937298   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.937652   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.937697   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.937725   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.937838   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:27.938074   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.938222   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:27.938222   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:27.938247   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:27.938414   60214 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:04:27.938487   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:04:27.938623   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:04:27.938776   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:04:27.938920   60214 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:04:28.039444   60214 ssh_runner.go:195] Run: systemctl --version
	I0804 00:04:28.045710   60214 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:04:28.208622   60214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:04:28.215082   60214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:04:28.215157   60214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:04:28.236517   60214 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:04:28.236545   60214 start.go:495] detecting cgroup driver to use...
	I0804 00:04:28.236615   60214 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:04:28.254055   60214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:04:28.268961   60214 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:04:28.269034   60214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:04:28.283246   60214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:04:28.297381   60214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:04:28.427729   60214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:04:28.591001   60214 docker.go:233] disabling docker service ...
	I0804 00:04:28.591070   60214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:04:28.607281   60214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:04:28.625017   60214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:04:28.778325   60214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:04:28.924213   60214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:04:28.940281   60214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:04:28.960676   60214 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:04:28.960753   60214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:04:28.971808   60214 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:04:28.971865   60214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:04:28.982887   60214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:04:28.993975   60214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:04:29.005087   60214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:04:29.016679   60214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:04:29.026430   60214 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:04:29.026502   60214 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:04:29.041377   60214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:04:29.052889   60214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:04:29.183162   60214 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:04:29.325955   60214 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:04:29.326036   60214 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:04:29.331298   60214 start.go:563] Will wait 60s for crictl version
	I0804 00:04:29.331374   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:29.335268   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:04:29.374986   60214 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:04:29.375080   60214 ssh_runner.go:195] Run: crio --version
	I0804 00:04:29.402958   60214 ssh_runner.go:195] Run: crio --version
	I0804 00:04:29.436493   60214 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:04:29.438007   60214 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:04:29.440579   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:29.440871   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:04:18 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:04:29.440892   60214 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:04:29.441091   60214 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:04:29.445451   60214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:04:29.457966   60214 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:04:29.458065   60214 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:04:29.458116   60214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:04:29.490517   60214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:04:29.490597   60214 ssh_runner.go:195] Run: which lz4
	I0804 00:04:29.494553   60214 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:04:29.498691   60214 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:04:29.498721   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:04:31.160009   60214 crio.go:462] duration metric: took 1.665492408s to copy over tarball
	I0804 00:04:31.160076   60214 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:04:33.712353   60214 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.552250322s)
	I0804 00:04:33.712384   60214 crio.go:469] duration metric: took 2.552346962s to extract the tarball
	I0804 00:04:33.712393   60214 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:04:33.755900   60214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:04:33.813458   60214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:04:33.813484   60214 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:04:33.813561   60214 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:04:33.813595   60214 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:04:33.813562   60214 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:04:33.813648   60214 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:04:33.813657   60214 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:04:33.813624   60214 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:04:33.813666   60214 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:04:33.813621   60214 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:04:33.815195   60214 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:04:33.815201   60214 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:04:33.815226   60214 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:04:33.815234   60214 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:04:33.815197   60214 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:04:33.815195   60214 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:04:33.815194   60214 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:04:33.815201   60214 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:04:33.948148   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:04:33.969191   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:04:33.969344   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:04:33.979949   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:04:33.985725   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:04:34.002169   60214 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:04:34.002212   60214 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:04:34.002258   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:34.008154   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:04:34.052730   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:04:34.139042   60214 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:04:34.139095   60214 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:04:34.139098   60214 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:04:34.139135   60214 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:04:34.139140   60214 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:04:34.139143   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:34.139168   60214 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:04:34.139051   60214 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:04:34.139182   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:34.139199   60214 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:04:34.139224   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:34.139236   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:34.139253   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:04:34.167353   60214 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:04:34.167406   60214 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:04:34.167457   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:34.169728   60214 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:04:34.169772   60214 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:04:34.169820   60214 ssh_runner.go:195] Run: which crictl
	I0804 00:04:34.193547   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:04:34.193571   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:04:34.193579   60214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:04:34.193599   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:04:34.193637   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:04:34.193653   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:04:34.193690   60214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:04:34.313998   60214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:04:34.325453   60214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:04:34.329139   60214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:04:34.329237   60214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:04:34.332935   60214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:04:34.333094   60214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:04:34.730570   60214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:04:34.877791   60214 cache_images.go:92] duration metric: took 1.064289174s to LoadCachedImages
	W0804 00:04:34.877872   60214 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
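The warning above only means the local cache under .minikube/cache/images had no tarballs for these v1.20.0 images, so minikube moves on and lets kubeadm pull them instead. If one wanted to pre-seed that cache for a rerun, a rough sketch (image names taken from the LoadCachedImages list above; the cache subcommand is the older, still-available spelling):
	# sketch only: pre-populate minikube's local image cache for an old Kubernetes version
	minikube cache add registry.k8s.io/kube-controller-manager:v1.20.0
	minikube cache add registry.k8s.io/kube-apiserver:v1.20.0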
	I0804 00:04:34.877882   60214 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:04:34.878009   60214 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
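The ExecStart line above is the kubelet unit drop-in minikube generates for v1.20.0 with the crio runtime. A minimal sketch for confirming what actually landed on the node, assuming the old-k8s-version-576210 profile from this run and the drop-in path that is scp'd a few lines below:
	# sketch only: inspect the kubelet unit and the drop-in minikube wrote to the VM
	minikube ssh -p old-k8s-version-576210 "sudo systemctl cat kubelet"
	minikube ssh -p old-k8s-version-576210 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"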
	I0804 00:04:34.878096   60214 ssh_runner.go:195] Run: crio config
	I0804 00:04:34.927252   60214 cni.go:84] Creating CNI manager for ""
	I0804 00:04:34.927274   60214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:04:34.927284   60214 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:04:34.927302   60214 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:04:34.927442   60214 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
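The block above is the full kubeadm.yaml minikube renders for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration for v1.20.0). A rough sketch for reading the rendered file back off the node to compare with this log, assuming the same profile name and the /var/tmp/minikube paths used later in the run:
	# sketch only: read back the kubeadm config that was copied to the VM
	minikube ssh -p old-k8s-version-576210 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	minikube ssh -p old-k8s-version-576210 "sudo cat /var/tmp/minikube/kubeadm.yaml"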
	
	I0804 00:04:34.927516   60214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:04:34.937895   60214 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:04:34.937965   60214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:04:34.947809   60214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:04:34.965280   60214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:04:34.982952   60214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0804 00:04:35.001200   60214 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:04:35.006519   60214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:04:35.019369   60214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:04:35.134108   60214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:04:35.152341   60214 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:04:35.152368   60214 certs.go:194] generating shared ca certs ...
	I0804 00:04:35.152397   60214 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:04:35.152570   60214 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:04:35.152629   60214 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:04:35.152643   60214 certs.go:256] generating profile certs ...
	I0804 00:04:35.152716   60214 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:04:35.152734   60214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt with IP's: []
	I0804 00:04:35.273972   60214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt ...
	I0804 00:04:35.274010   60214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: {Name:mkd8d1c437f47a48cb7ce8e147469f23cd955372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:04:35.274227   60214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key ...
	I0804 00:04:35.274246   60214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key: {Name:mk53d7f203fde30ebe5ec7ac63a11fb2c97d5ab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:04:35.274383   60214 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:04:35.274409   60214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt.5357f842 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.154]
	I0804 00:04:35.335918   60214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt.5357f842 ...
	I0804 00:04:35.335944   60214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt.5357f842: {Name:mk6538ae819b817d3819fc1b649c2f6da0e1f200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:04:35.336096   60214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842 ...
	I0804 00:04:35.336109   60214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842: {Name:mk75bf7ff1bfdbf0991e421cd5095a8309731a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:04:35.336182   60214 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt.5357f842 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt
	I0804 00:04:35.336294   60214 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key
	I0804 00:04:35.336391   60214 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:04:35.336413   60214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt with IP's: []
	I0804 00:04:35.405703   60214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt ...
	I0804 00:04:35.405729   60214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt: {Name:mke95eeb5c7cf229cd5f383ac2f5eefc1e8c1364 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:04:35.405886   60214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key ...
	I0804 00:04:35.405900   60214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key: {Name:mk9feb1ae5778aa11c5f500424a80878aabd5cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:04:35.406068   60214 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:04:35.406103   60214 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:04:35.406112   60214 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:04:35.406138   60214 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:04:35.406166   60214 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:04:35.406187   60214 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:04:35.406230   60214 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:04:35.406824   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:04:35.432789   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:04:35.457876   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:04:35.482152   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:04:35.506872   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:04:35.591136   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:04:35.617124   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:04:35.641732   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:04:35.667431   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:04:35.691292   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:04:35.715937   60214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:04:35.740675   60214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:04:35.758056   60214 ssh_runner.go:195] Run: openssl version
	I0804 00:04:35.764431   60214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:04:35.779147   60214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:04:35.784199   60214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:04:35.784252   60214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:04:35.790098   60214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:04:35.801375   60214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:04:35.814556   60214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:04:35.819373   60214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:04:35.819432   60214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:04:35.825486   60214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:04:35.837613   60214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:04:35.849822   60214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:04:35.854844   60214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:04:35.854917   60214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:04:35.862424   60214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
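The test -L / ln -fs commands above are the usual OpenSSL hash-link layout: each CA certificate is symlinked under /etc/ssl/certs as <subject-hash>.0, where the hash is what the preceding openssl x509 -hash call prints. A hedged, standalone illustration of the same step for the minikubeCA certificate from this run:
	# sketch only: compute the subject hash and create the matching trust-store symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 in this run
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0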
	I0804 00:04:35.876714   60214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:04:35.881017   60214 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:04:35.881083   60214 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:04:35.881164   60214 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:04:35.881210   60214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:04:35.918665   60214 cri.go:89] found id: ""
	I0804 00:04:35.918740   60214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:04:35.929582   60214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:04:35.940195   60214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:04:35.950701   60214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:04:35.950727   60214 kubeadm.go:157] found existing configuration files:
	
	I0804 00:04:35.950779   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:04:35.960211   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:04:35.960273   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:04:35.971231   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:04:35.981812   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:04:35.981872   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:04:35.994958   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:04:36.004385   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:04:36.004457   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:04:36.023904   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:04:36.036383   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:04:36.036446   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:04:36.048709   60214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:04:36.321621   60214 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:06:34.911601   60214 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:06:34.911708   60214 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:06:34.913288   60214 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:06:34.913377   60214 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:06:34.913476   60214 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:06:34.913710   60214 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:06:34.913860   60214 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:06:34.913954   60214 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:06:34.915837   60214 out.go:204]   - Generating certificates and keys ...
	I0804 00:06:34.915935   60214 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:06:34.916025   60214 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:06:34.916142   60214 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:06:34.916231   60214 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:06:34.916337   60214 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:06:34.916390   60214 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:06:34.916476   60214 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:06:34.916647   60214 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-576210] and IPs [192.168.72.154 127.0.0.1 ::1]
	I0804 00:06:34.916733   60214 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:06:34.916911   60214 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-576210] and IPs [192.168.72.154 127.0.0.1 ::1]
	I0804 00:06:34.917007   60214 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:06:34.917100   60214 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:06:34.917166   60214 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:06:34.917218   60214 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:06:34.917286   60214 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:06:34.917366   60214 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:06:34.917455   60214 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:06:34.917506   60214 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:06:34.917633   60214 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:06:34.917759   60214 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:06:34.917815   60214 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:06:34.917910   60214 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:06:34.919539   60214 out.go:204]   - Booting up control plane ...
	I0804 00:06:34.919624   60214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:06:34.919734   60214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:06:34.919855   60214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:06:34.919969   60214 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:06:34.920166   60214 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:06:34.920247   60214 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:06:34.920354   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:06:34.920612   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:06:34.920707   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:06:34.921033   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:06:34.921154   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:06:34.921376   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:06:34.921475   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:06:34.921746   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:06:34.921843   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:06:34.922087   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:06:34.922102   60214 kubeadm.go:310] 
	I0804 00:06:34.922134   60214 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:06:34.922188   60214 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:06:34.922198   60214 kubeadm.go:310] 
	I0804 00:06:34.922252   60214 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:06:34.922301   60214 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:06:34.922452   60214 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:06:34.922463   60214 kubeadm.go:310] 
	I0804 00:06:34.922613   60214 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:06:34.922652   60214 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:06:34.922680   60214 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:06:34.922690   60214 kubeadm.go:310] 
	I0804 00:06:34.922809   60214 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:06:34.922892   60214 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:06:34.922900   60214 kubeadm.go:310] 
	I0804 00:06:34.922987   60214 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:06:34.923064   60214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:06:34.923133   60214 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:06:34.923205   60214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:06:34.923314   60214 kubeadm.go:310] 
	W0804 00:06:34.923337   60214 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-576210] and IPs [192.168.72.154 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-576210] and IPs [192.168.72.154 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-576210] and IPs [192.168.72.154 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-576210] and IPs [192.168.72.154 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
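kubeadm's own hint above points at the kubelet and at the control-plane containers. A minimal triage sketch against the minikube VM, using only the commands the log itself suggests (profile name and CRI socket taken from this run):
	# sketch only: check the kubelet and list control-plane containers inside the VM
	minikube ssh -p old-k8s-version-576210 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-576210 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	minikube ssh -p old-k8s-version-576210 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"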
	
	I0804 00:06:34.923381   60214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:06:35.392750   60214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:06:35.412971   60214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:06:35.424772   60214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:06:35.424794   60214 kubeadm.go:157] found existing configuration files:
	
	I0804 00:06:35.424843   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:06:35.435612   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:06:35.435672   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:06:35.445874   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:06:35.455254   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:06:35.455320   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:06:35.464565   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:06:35.473253   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:06:35.473302   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:06:35.482331   60214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:06:35.491552   60214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:06:35.491606   60214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:06:35.500676   60214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:06:35.737772   60214 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:08:32.242209   60214 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:08:32.242299   60214 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:08:32.244203   60214 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:08:32.244322   60214 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:08:32.244433   60214 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:08:32.244558   60214 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:08:32.244672   60214 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:08:32.244773   60214 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:08:32.246608   60214 out.go:204]   - Generating certificates and keys ...
	I0804 00:08:32.246685   60214 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:08:32.246743   60214 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:08:32.246865   60214 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:08:32.246928   60214 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:08:32.246987   60214 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:08:32.247042   60214 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:08:32.247097   60214 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:08:32.247157   60214 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:08:32.247220   60214 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:08:32.247306   60214 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:08:32.247357   60214 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:08:32.247401   60214 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:08:32.247441   60214 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:08:32.247485   60214 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:08:32.247545   60214 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:08:32.247592   60214 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:08:32.247681   60214 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:08:32.247751   60214 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:08:32.247792   60214 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:08:32.247885   60214 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:08:32.250356   60214 out.go:204]   - Booting up control plane ...
	I0804 00:08:32.250440   60214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:08:32.250506   60214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:08:32.250562   60214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:08:32.250632   60214 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:08:32.250854   60214 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:08:32.250917   60214 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:08:32.250975   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:08:32.251195   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:08:32.251297   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:08:32.251484   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:08:32.251550   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:08:32.251758   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:08:32.251819   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:08:32.251981   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:08:32.252057   60214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:08:32.252349   60214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:08:32.252371   60214 kubeadm.go:310] 
	I0804 00:08:32.252428   60214 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:08:32.252484   60214 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:08:32.252495   60214 kubeadm.go:310] 
	I0804 00:08:32.252548   60214 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:08:32.252600   60214 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:08:32.252692   60214 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:08:32.252698   60214 kubeadm.go:310] 
	I0804 00:08:32.252817   60214 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:08:32.252875   60214 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:08:32.252933   60214 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:08:32.252943   60214 kubeadm.go:310] 
	I0804 00:08:32.253036   60214 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:08:32.253121   60214 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:08:32.253128   60214 kubeadm.go:310] 
	I0804 00:08:32.253213   60214 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:08:32.253280   60214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:08:32.253401   60214 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:08:32.253484   60214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:08:32.253538   60214 kubeadm.go:310] 
	I0804 00:08:32.253562   60214 kubeadm.go:394] duration metric: took 3m56.372483865s to StartCluster
	I0804 00:08:32.253619   60214 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:08:32.253673   60214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:08:32.298732   60214 cri.go:89] found id: ""
	I0804 00:08:32.298760   60214 logs.go:276] 0 containers: []
	W0804 00:08:32.298771   60214 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:08:32.298778   60214 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:08:32.298841   60214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:08:32.340463   60214 cri.go:89] found id: ""
	I0804 00:08:32.340492   60214 logs.go:276] 0 containers: []
	W0804 00:08:32.340501   60214 logs.go:278] No container was found matching "etcd"
	I0804 00:08:32.340508   60214 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:08:32.340571   60214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:08:32.386531   60214 cri.go:89] found id: ""
	I0804 00:08:32.386559   60214 logs.go:276] 0 containers: []
	W0804 00:08:32.386569   60214 logs.go:278] No container was found matching "coredns"
	I0804 00:08:32.386577   60214 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:08:32.386642   60214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:08:32.422632   60214 cri.go:89] found id: ""
	I0804 00:08:32.422659   60214 logs.go:276] 0 containers: []
	W0804 00:08:32.422670   60214 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:08:32.422677   60214 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:08:32.422738   60214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:08:32.466415   60214 cri.go:89] found id: ""
	I0804 00:08:32.466445   60214 logs.go:276] 0 containers: []
	W0804 00:08:32.466457   60214 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:08:32.466464   60214 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:08:32.466521   60214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:08:32.503601   60214 cri.go:89] found id: ""
	I0804 00:08:32.503631   60214 logs.go:276] 0 containers: []
	W0804 00:08:32.503641   60214 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:08:32.503649   60214 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:08:32.503729   60214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:08:32.539015   60214 cri.go:89] found id: ""
	I0804 00:08:32.539047   60214 logs.go:276] 0 containers: []
	W0804 00:08:32.539056   60214 logs.go:278] No container was found matching "kindnet"
	I0804 00:08:32.539072   60214 logs.go:123] Gathering logs for kubelet ...
	I0804 00:08:32.539092   60214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:08:32.590939   60214 logs.go:123] Gathering logs for dmesg ...
	I0804 00:08:32.590970   60214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:08:32.605962   60214 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:08:32.605996   60214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:08:32.716956   60214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:08:32.716982   60214 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:08:32.716998   60214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:08:32.810557   60214 logs.go:123] Gathering logs for container status ...
	I0804 00:08:32.810595   60214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0804 00:08:32.851916   60214 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:08:32.851965   60214 out.go:239] * 
	W0804 00:08:32.852038   60214 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:08:32.852069   60214 out.go:239] * 
	W0804 00:08:32.852984   60214 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:08:32.856256   60214 out.go:177] 
	W0804 00:08:32.857610   60214 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:08:32.857661   60214 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:08:32.857680   60214 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:08:32.859284   60214 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 6 (221.731408ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:08:33.121979   63902 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-576210" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (288.66s)
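The log above points at a kubelet that never became healthy while `kubeadm init` waited on the v1.20.0 control plane, and minikube's own suggestion is to check the kubelet journal and retry with `--extra-config=kubelet.cgroup-driver=systemd`. Below is a minimal manual reproduction along those lines; the profile name and start flags are copied from the failing command, while the `delete` step and the command ordering are illustrative assumptions, not part of the test.

	# Inspect the kubelet on the failing node (commands suggested in the kubeadm output above)
	out/minikube-linux-amd64 -p old-k8s-version-576210 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-576210 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# Refresh the stale kubeconfig entry reported by the post-mortem status check
	out/minikube-linux-amd64 -p old-k8s-version-576210 update-context

	# Hypothetical re-run of the first start with the cgroup-driver hint from the log
	out/minikube-linux-amd64 delete -p old-k8s-version-576210
	out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd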

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-877598 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-877598 --alsologtostderr -v=3: exit status 82 (2m0.703301124s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-877598"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:07:19.250891   63080 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:07:19.251146   63080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:07:19.251157   63080 out.go:304] Setting ErrFile to fd 2...
	I0804 00:07:19.251162   63080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:07:19.251426   63080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:07:19.251679   63080 out.go:298] Setting JSON to false
	I0804 00:07:19.251754   63080 mustload.go:65] Loading cluster: embed-certs-877598
	I0804 00:07:19.252065   63080 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:07:19.252135   63080 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/config.json ...
	I0804 00:07:19.252301   63080 mustload.go:65] Loading cluster: embed-certs-877598
	I0804 00:07:19.252400   63080 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:07:19.252434   63080 stop.go:39] StopHost: embed-certs-877598
	I0804 00:07:19.252775   63080 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:07:19.252821   63080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:07:19.267780   63080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41405
	I0804 00:07:19.268272   63080 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:07:19.268828   63080 main.go:141] libmachine: Using API Version  1
	I0804 00:07:19.268851   63080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:07:19.269256   63080 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:07:19.271931   63080 out.go:177] * Stopping node "embed-certs-877598"  ...
	I0804 00:07:19.273268   63080 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 00:07:19.273298   63080 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:07:19.273577   63080 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 00:07:19.273621   63080 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:07:19.276580   63080 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:07:19.276957   63080 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:06:27 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:07:19.276994   63080 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:07:19.277190   63080 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:07:19.277389   63080 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:07:19.277538   63080 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:07:19.277723   63080 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:07:19.371680   63080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 00:07:19.427333   63080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 00:07:19.519722   63080 main.go:141] libmachine: Stopping "embed-certs-877598"...
	I0804 00:07:19.519754   63080 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:07:19.521436   63080 main.go:141] libmachine: (embed-certs-877598) Calling .Stop
	I0804 00:07:19.525478   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 0/120
	I0804 00:07:20.526950   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 1/120
	I0804 00:07:21.528272   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 2/120
	I0804 00:07:22.529671   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 3/120
	I0804 00:07:23.531959   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 4/120
	I0804 00:07:24.534188   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 5/120
	I0804 00:07:25.535677   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 6/120
	I0804 00:07:26.537458   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 7/120
	I0804 00:07:27.539745   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 8/120
	I0804 00:07:28.541326   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 9/120
	I0804 00:07:29.543574   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 10/120
	I0804 00:07:30.725270   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 11/120
	I0804 00:07:31.727101   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 12/120
	I0804 00:07:32.728324   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 13/120
	I0804 00:07:33.729808   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 14/120
	I0804 00:07:34.731827   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 15/120
	I0804 00:07:35.733320   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 16/120
	I0804 00:07:36.734832   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 17/120
	I0804 00:07:37.736171   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 18/120
	I0804 00:07:38.737604   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 19/120
	I0804 00:07:39.739685   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 20/120
	I0804 00:07:40.741211   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 21/120
	I0804 00:07:41.743025   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 22/120
	I0804 00:07:42.745285   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 23/120
	I0804 00:07:43.747085   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 24/120
	I0804 00:07:44.748991   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 25/120
	I0804 00:07:45.751288   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 26/120
	I0804 00:07:46.753329   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 27/120
	I0804 00:07:47.754697   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 28/120
	I0804 00:07:48.756365   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 29/120
	I0804 00:07:49.758272   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 30/120
	I0804 00:07:50.759575   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 31/120
	I0804 00:07:51.760835   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 32/120
	I0804 00:07:52.762388   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 33/120
	I0804 00:07:53.764110   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 34/120
	I0804 00:07:54.766115   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 35/120
	I0804 00:07:55.767508   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 36/120
	I0804 00:07:56.768770   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 37/120
	I0804 00:07:57.770187   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 38/120
	I0804 00:07:58.771685   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 39/120
	I0804 00:07:59.773656   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 40/120
	I0804 00:08:00.776001   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 41/120
	I0804 00:08:01.777641   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 42/120
	I0804 00:08:02.779000   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 43/120
	I0804 00:08:03.780387   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 44/120
	I0804 00:08:04.782271   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 45/120
	I0804 00:08:05.783716   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 46/120
	I0804 00:08:06.785253   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 47/120
	I0804 00:08:07.787119   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 48/120
	I0804 00:08:08.788673   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 49/120
	I0804 00:08:09.790963   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 50/120
	I0804 00:08:10.792391   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 51/120
	I0804 00:08:11.794016   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 52/120
	I0804 00:08:12.795812   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 53/120
	I0804 00:08:13.797509   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 54/120
	I0804 00:08:14.799521   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 55/120
	I0804 00:08:15.800839   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 56/120
	I0804 00:08:16.802193   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 57/120
	I0804 00:08:17.803571   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 58/120
	I0804 00:08:18.805972   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 59/120
	I0804 00:08:19.807994   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 60/120
	I0804 00:08:20.809451   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 61/120
	I0804 00:08:21.810746   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 62/120
	I0804 00:08:22.811941   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 63/120
	I0804 00:08:23.813817   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 64/120
	I0804 00:08:24.815764   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 65/120
	I0804 00:08:25.818340   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 66/120
	I0804 00:08:26.820147   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 67/120
	I0804 00:08:27.822231   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 68/120
	I0804 00:08:28.824311   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 69/120
	I0804 00:08:29.826394   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 70/120
	I0804 00:08:30.827785   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 71/120
	I0804 00:08:31.829181   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 72/120
	I0804 00:08:32.830922   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 73/120
	I0804 00:08:33.832594   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 74/120
	I0804 00:08:34.834639   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 75/120
	I0804 00:08:35.835795   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 76/120
	I0804 00:08:36.837123   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 77/120
	I0804 00:08:37.838599   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 78/120
	I0804 00:08:38.839953   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 79/120
	I0804 00:08:39.842207   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 80/120
	I0804 00:08:40.843751   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 81/120
	I0804 00:08:41.845089   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 82/120
	I0804 00:08:42.846429   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 83/120
	I0804 00:08:43.848058   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 84/120
	I0804 00:08:44.850072   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 85/120
	I0804 00:08:45.851510   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 86/120
	I0804 00:08:46.853448   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 87/120
	I0804 00:08:47.854860   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 88/120
	I0804 00:08:48.856526   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 89/120
	I0804 00:08:49.859087   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 90/120
	I0804 00:08:50.860752   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 91/120
	I0804 00:08:51.862223   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 92/120
	I0804 00:08:52.863556   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 93/120
	I0804 00:08:53.865710   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 94/120
	I0804 00:08:54.867639   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 95/120
	I0804 00:08:55.869050   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 96/120
	I0804 00:08:56.870410   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 97/120
	I0804 00:08:57.871728   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 98/120
	I0804 00:08:58.872931   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 99/120
	I0804 00:08:59.875057   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 100/120
	I0804 00:09:00.876394   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 101/120
	I0804 00:09:01.877704   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 102/120
	I0804 00:09:02.879216   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 103/120
	I0804 00:09:03.880982   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 104/120
	I0804 00:09:04.883322   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 105/120
	I0804 00:09:05.884612   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 106/120
	I0804 00:09:06.886022   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 107/120
	I0804 00:09:07.887468   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 108/120
	I0804 00:09:08.888743   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 109/120
	I0804 00:09:09.891108   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 110/120
	I0804 00:09:10.892544   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 111/120
	I0804 00:09:11.893959   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 112/120
	I0804 00:09:12.895389   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 113/120
	I0804 00:09:13.896860   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 114/120
	I0804 00:09:14.898990   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 115/120
	I0804 00:09:15.900242   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 116/120
	I0804 00:09:16.901663   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 117/120
	I0804 00:09:17.902899   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 118/120
	I0804 00:09:18.904311   63080 main.go:141] libmachine: (embed-certs-877598) Waiting for machine to stop 119/120
	I0804 00:09:19.905181   63080 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0804 00:09:19.905237   63080 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0804 00:09:19.907914   63080 out.go:177] 
	W0804 00:09:19.909393   63080 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0804 00:09:19.909416   63080 out.go:239] * 
	* 
	W0804 00:09:19.912144   63080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:09:19.913692   63080 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-877598 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598: exit status 3 (18.42242301s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:09:38.337750   64248 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	E0804 00:09:38.337771   64248 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-877598" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-118016 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-118016 --alsologtostderr -v=3: exit status 82 (2m0.5004657s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-118016"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:08:30.527322   63867 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:08:30.527464   63867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:08:30.527474   63867 out.go:304] Setting ErrFile to fd 2...
	I0804 00:08:30.527481   63867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:08:30.527667   63867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:08:30.527923   63867 out.go:298] Setting JSON to false
	I0804 00:08:30.528015   63867 mustload.go:65] Loading cluster: no-preload-118016
	I0804 00:08:30.528330   63867 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:08:30.528408   63867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/config.json ...
	I0804 00:08:30.528590   63867 mustload.go:65] Loading cluster: no-preload-118016
	I0804 00:08:30.528711   63867 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:08:30.528749   63867 stop.go:39] StopHost: no-preload-118016
	I0804 00:08:30.529280   63867 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:08:30.529339   63867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:08:30.544069   63867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0804 00:08:30.544529   63867 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:08:30.545094   63867 main.go:141] libmachine: Using API Version  1
	I0804 00:08:30.545116   63867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:08:30.545472   63867 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:08:30.548083   63867 out.go:177] * Stopping node "no-preload-118016"  ...
	I0804 00:08:30.549407   63867 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 00:08:30.549454   63867 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:08:30.549675   63867 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 00:08:30.549710   63867 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:08:30.553286   63867 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:08:30.553796   63867 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:08:30.553830   63867 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:08:30.554016   63867 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:08:30.554199   63867 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:08:30.554361   63867 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:08:30.554507   63867 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:08:30.655851   63867 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 00:08:30.716495   63867 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 00:08:30.775215   63867 main.go:141] libmachine: Stopping "no-preload-118016"...
	I0804 00:08:30.775246   63867 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:08:30.776810   63867 main.go:141] libmachine: (no-preload-118016) Calling .Stop
	I0804 00:08:30.780326   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 0/120
	I0804 00:08:31.781741   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 1/120
	I0804 00:08:32.783209   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 2/120
	I0804 00:08:33.784679   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 3/120
	I0804 00:08:34.786581   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 4/120
	I0804 00:08:35.788569   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 5/120
	I0804 00:08:36.789901   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 6/120
	I0804 00:08:37.791290   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 7/120
	I0804 00:08:38.792889   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 8/120
	I0804 00:08:39.794260   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 9/120
	I0804 00:08:40.796442   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 10/120
	I0804 00:08:41.797819   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 11/120
	I0804 00:08:42.799225   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 12/120
	I0804 00:08:43.800724   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 13/120
	I0804 00:08:44.802091   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 14/120
	I0804 00:08:45.804179   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 15/120
	I0804 00:08:46.805612   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 16/120
	I0804 00:08:47.806846   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 17/120
	I0804 00:08:48.808249   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 18/120
	I0804 00:08:49.809743   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 19/120
	I0804 00:08:50.811947   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 20/120
	I0804 00:08:51.813402   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 21/120
	I0804 00:08:52.814736   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 22/120
	I0804 00:08:53.816088   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 23/120
	I0804 00:08:54.817419   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 24/120
	I0804 00:08:55.819610   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 25/120
	I0804 00:08:56.820992   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 26/120
	I0804 00:08:57.822444   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 27/120
	I0804 00:08:58.824345   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 28/120
	I0804 00:08:59.825844   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 29/120
	I0804 00:09:00.828359   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 30/120
	I0804 00:09:01.829795   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 31/120
	I0804 00:09:02.831465   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 32/120
	I0804 00:09:03.834184   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 33/120
	I0804 00:09:04.835668   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 34/120
	I0804 00:09:05.837731   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 35/120
	I0804 00:09:06.839218   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 36/120
	I0804 00:09:07.840848   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 37/120
	I0804 00:09:08.842172   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 38/120
	I0804 00:09:09.843836   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 39/120
	I0804 00:09:10.846145   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 40/120
	I0804 00:09:11.847657   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 41/120
	I0804 00:09:12.849261   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 42/120
	I0804 00:09:13.850987   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 43/120
	I0804 00:09:14.852295   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 44/120
	I0804 00:09:15.854401   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 45/120
	I0804 00:09:16.856103   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 46/120
	I0804 00:09:17.857549   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 47/120
	I0804 00:09:18.859082   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 48/120
	I0804 00:09:19.860363   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 49/120
	I0804 00:09:20.862502   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 50/120
	I0804 00:09:21.864144   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 51/120
	I0804 00:09:22.865701   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 52/120
	I0804 00:09:23.867104   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 53/120
	I0804 00:09:24.868503   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 54/120
	I0804 00:09:25.870944   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 55/120
	I0804 00:09:26.872377   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 56/120
	I0804 00:09:27.873881   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 57/120
	I0804 00:09:28.875791   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 58/120
	I0804 00:09:29.877347   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 59/120
	I0804 00:09:30.879680   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 60/120
	I0804 00:09:31.881270   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 61/120
	I0804 00:09:32.882703   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 62/120
	I0804 00:09:33.884034   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 63/120
	I0804 00:09:34.885647   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 64/120
	I0804 00:09:35.887924   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 65/120
	I0804 00:09:36.889434   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 66/120
	I0804 00:09:37.891024   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 67/120
	I0804 00:09:38.892430   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 68/120
	I0804 00:09:39.893891   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 69/120
	I0804 00:09:40.895113   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 70/120
	I0804 00:09:41.896821   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 71/120
	I0804 00:09:42.898263   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 72/120
	I0804 00:09:43.899567   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 73/120
	I0804 00:09:44.900874   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 74/120
	I0804 00:09:45.902798   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 75/120
	I0804 00:09:46.904072   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 76/120
	I0804 00:09:47.905392   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 77/120
	I0804 00:09:48.906972   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 78/120
	I0804 00:09:49.908759   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 79/120
	I0804 00:09:50.910852   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 80/120
	I0804 00:09:51.912425   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 81/120
	I0804 00:09:52.914000   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 82/120
	I0804 00:09:53.915581   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 83/120
	I0804 00:09:54.917171   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 84/120
	I0804 00:09:55.919167   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 85/120
	I0804 00:09:56.920719   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 86/120
	I0804 00:09:57.922245   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 87/120
	I0804 00:09:58.923726   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 88/120
	I0804 00:09:59.925334   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 89/120
	I0804 00:10:00.927552   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 90/120
	I0804 00:10:01.929155   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 91/120
	I0804 00:10:02.930776   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 92/120
	I0804 00:10:03.932348   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 93/120
	I0804 00:10:04.934115   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 94/120
	I0804 00:10:05.936389   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 95/120
	I0804 00:10:06.937870   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 96/120
	I0804 00:10:07.939436   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 97/120
	I0804 00:10:08.940707   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 98/120
	I0804 00:10:09.942273   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 99/120
	I0804 00:10:10.944901   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 100/120
	I0804 00:10:11.946357   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 101/120
	I0804 00:10:12.947889   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 102/120
	I0804 00:10:13.949442   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 103/120
	I0804 00:10:14.950945   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 104/120
	I0804 00:10:15.952952   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 105/120
	I0804 00:10:16.954411   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 106/120
	I0804 00:10:17.955955   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 107/120
	I0804 00:10:18.957428   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 108/120
	I0804 00:10:19.959010   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 109/120
	I0804 00:10:20.961248   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 110/120
	I0804 00:10:21.962870   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 111/120
	I0804 00:10:22.964309   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 112/120
	I0804 00:10:23.965884   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 113/120
	I0804 00:10:24.967523   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 114/120
	I0804 00:10:25.969757   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 115/120
	I0804 00:10:26.971279   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 116/120
	I0804 00:10:27.972922   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 117/120
	I0804 00:10:28.974442   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 118/120
	I0804 00:10:29.976083   63867 main.go:141] libmachine: (no-preload-118016) Waiting for machine to stop 119/120
	I0804 00:10:30.977284   63867 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0804 00:10:30.977374   63867 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0804 00:10:30.979577   63867 out.go:177] 
	W0804 00:10:30.981174   63867 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0804 00:10:30.981191   63867 out.go:239] * 
	* 
	W0804 00:10:30.983679   63867 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:10:30.985042   63867 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-118016 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016: exit status 3 (18.51975613s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:10:49.505658   64860 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	E0804 00:10:49.505680   64860 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-118016" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-576210 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-576210 create -f testdata/busybox.yaml: exit status 1 (43.479894ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-576210" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-576210 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 6 (219.804218ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:08:33.385732   63942 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-576210" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 6 (231.9982ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:08:33.608154   63972 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-576210" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-576210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-576210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m35.176978315s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-576210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-576210 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-576210 describe deploy/metrics-server -n kube-system: exit status 1 (42.064851ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-576210" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-576210 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 6 (217.517876ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:10:09.054459   64626 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-576210" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (95.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-969068 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-969068 --alsologtostderr -v=3: exit status 82 (2m0.498767008s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-969068"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:09:20.842988   64300 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:09:20.843140   64300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:09:20.843153   64300 out.go:304] Setting ErrFile to fd 2...
	I0804 00:09:20.843160   64300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:09:20.843344   64300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:09:20.843551   64300 out.go:298] Setting JSON to false
	I0804 00:09:20.843623   64300 mustload.go:65] Loading cluster: default-k8s-diff-port-969068
	I0804 00:09:20.843929   64300 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:09:20.843997   64300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:09:20.844171   64300 mustload.go:65] Loading cluster: default-k8s-diff-port-969068
	I0804 00:09:20.844273   64300 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:09:20.844305   64300 stop.go:39] StopHost: default-k8s-diff-port-969068
	I0804 00:09:20.844675   64300 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:09:20.844723   64300 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:09:20.859266   64300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0804 00:09:20.859768   64300 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:09:20.860498   64300 main.go:141] libmachine: Using API Version  1
	I0804 00:09:20.860521   64300 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:09:20.860969   64300 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:09:20.863567   64300 out.go:177] * Stopping node "default-k8s-diff-port-969068"  ...
	I0804 00:09:20.865197   64300 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 00:09:20.865244   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:09:20.865536   64300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 00:09:20.865575   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:09:20.868572   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:09:20.869034   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:07:46 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:09:20.869069   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:09:20.869212   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:09:20.869394   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:09:20.869595   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:09:20.869744   64300 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:09:20.969747   64300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 00:09:21.029788   64300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 00:09:21.092609   64300 main.go:141] libmachine: Stopping "default-k8s-diff-port-969068"...
	I0804 00:09:21.092642   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:09:21.094258   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Stop
	I0804 00:09:21.097891   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 0/120
	I0804 00:09:22.099342   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 1/120
	I0804 00:09:23.100577   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 2/120
	I0804 00:09:24.102089   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 3/120
	I0804 00:09:25.103579   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 4/120
	I0804 00:09:26.105889   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 5/120
	I0804 00:09:27.107872   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 6/120
	I0804 00:09:28.109166   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 7/120
	I0804 00:09:29.110649   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 8/120
	I0804 00:09:30.112138   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 9/120
	I0804 00:09:31.113588   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 10/120
	I0804 00:09:32.115161   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 11/120
	I0804 00:09:33.116535   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 12/120
	I0804 00:09:34.117935   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 13/120
	I0804 00:09:35.119329   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 14/120
	I0804 00:09:36.121641   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 15/120
	I0804 00:09:37.123068   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 16/120
	I0804 00:09:38.124798   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 17/120
	I0804 00:09:39.126389   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 18/120
	I0804 00:09:40.127687   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 19/120
	I0804 00:09:41.130207   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 20/120
	I0804 00:09:42.131660   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 21/120
	I0804 00:09:43.133119   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 22/120
	I0804 00:09:44.134509   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 23/120
	I0804 00:09:45.135884   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 24/120
	I0804 00:09:46.138045   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 25/120
	I0804 00:09:47.139414   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 26/120
	I0804 00:09:48.140792   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 27/120
	I0804 00:09:49.142148   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 28/120
	I0804 00:09:50.143687   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 29/120
	I0804 00:09:51.146017   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 30/120
	I0804 00:09:52.147459   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 31/120
	I0804 00:09:53.149160   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 32/120
	I0804 00:09:54.150916   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 33/120
	I0804 00:09:55.152354   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 34/120
	I0804 00:09:56.154477   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 35/120
	I0804 00:09:57.155816   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 36/120
	I0804 00:09:58.157509   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 37/120
	I0804 00:09:59.158873   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 38/120
	I0804 00:10:00.160221   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 39/120
	I0804 00:10:01.161717   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 40/120
	I0804 00:10:02.163016   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 41/120
	I0804 00:10:03.164696   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 42/120
	I0804 00:10:04.166102   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 43/120
	I0804 00:10:05.167970   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 44/120
	I0804 00:10:06.170155   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 45/120
	I0804 00:10:07.171704   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 46/120
	I0804 00:10:08.173055   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 47/120
	I0804 00:10:09.175053   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 48/120
	I0804 00:10:10.176421   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 49/120
	I0804 00:10:11.177824   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 50/120
	I0804 00:10:12.179851   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 51/120
	I0804 00:10:13.181561   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 52/120
	I0804 00:10:14.184089   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 53/120
	I0804 00:10:15.185504   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 54/120
	I0804 00:10:16.187565   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 55/120
	I0804 00:10:17.189009   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 56/120
	I0804 00:10:18.190398   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 57/120
	I0804 00:10:19.191851   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 58/120
	I0804 00:10:20.193366   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 59/120
	I0804 00:10:21.195635   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 60/120
	I0804 00:10:22.197135   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 61/120
	I0804 00:10:23.198649   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 62/120
	I0804 00:10:24.200163   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 63/120
	I0804 00:10:25.201872   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 64/120
	I0804 00:10:26.204145   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 65/120
	I0804 00:10:27.205920   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 66/120
	I0804 00:10:28.207372   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 67/120
	I0804 00:10:29.208847   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 68/120
	I0804 00:10:30.210489   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 69/120
	I0804 00:10:31.212866   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 70/120
	I0804 00:10:32.214271   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 71/120
	I0804 00:10:33.215641   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 72/120
	I0804 00:10:34.216976   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 73/120
	I0804 00:10:35.218416   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 74/120
	I0804 00:10:36.220562   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 75/120
	I0804 00:10:37.222153   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 76/120
	I0804 00:10:38.223433   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 77/120
	I0804 00:10:39.224690   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 78/120
	I0804 00:10:40.226060   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 79/120
	I0804 00:10:41.228301   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 80/120
	I0804 00:10:42.229649   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 81/120
	I0804 00:10:43.230960   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 82/120
	I0804 00:10:44.232290   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 83/120
	I0804 00:10:45.234062   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 84/120
	I0804 00:10:46.235913   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 85/120
	I0804 00:10:47.237385   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 86/120
	I0804 00:10:48.238602   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 87/120
	I0804 00:10:49.240245   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 88/120
	I0804 00:10:50.241697   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 89/120
	I0804 00:10:51.243950   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 90/120
	I0804 00:10:52.245303   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 91/120
	I0804 00:10:53.246660   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 92/120
	I0804 00:10:54.247966   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 93/120
	I0804 00:10:55.249335   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 94/120
	I0804 00:10:56.251253   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 95/120
	I0804 00:10:57.252546   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 96/120
	I0804 00:10:58.253952   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 97/120
	I0804 00:10:59.255525   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 98/120
	I0804 00:11:00.257051   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 99/120
	I0804 00:11:01.259303   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 100/120
	I0804 00:11:02.260651   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 101/120
	I0804 00:11:03.262337   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 102/120
	I0804 00:11:04.264102   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 103/120
	I0804 00:11:05.265789   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 104/120
	I0804 00:11:06.268102   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 105/120
	I0804 00:11:07.269783   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 106/120
	I0804 00:11:08.271516   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 107/120
	I0804 00:11:09.272988   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 108/120
	I0804 00:11:10.274519   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 109/120
	I0804 00:11:11.276030   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 110/120
	I0804 00:11:12.277403   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 111/120
	I0804 00:11:13.279039   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 112/120
	I0804 00:11:14.281126   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 113/120
	I0804 00:11:15.282943   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 114/120
	I0804 00:11:16.285200   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 115/120
	I0804 00:11:17.286715   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 116/120
	I0804 00:11:18.288192   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 117/120
	I0804 00:11:19.289606   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 118/120
	I0804 00:11:20.291061   64300 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for machine to stop 119/120
	I0804 00:11:21.291722   64300 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0804 00:11:21.291775   64300 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0804 00:11:21.293884   64300 out.go:177] 
	W0804 00:11:21.295617   64300 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0804 00:11:21.295638   64300 out.go:239] * 
	* 
	W0804 00:11:21.298268   64300 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:11:21.299824   64300 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-969068 --alsologtostderr -v=3" : exit status 82
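The trace above shows the stop path polling the VM state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" ... "119/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. A minimal Go sketch of that poll-until-stopped pattern is below; it is illustrative only, and names such as waitForStop and vmState are invented for the example rather than taken from minikube's source.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // vmState is a stand-in for a driver call that reports the VM's current state.
    // In the failing run above it kept returning "Running" for the whole window.
    type vmState func() (string, error)

    // waitForStop polls the VM state once per second, up to maxAttempts times,
    // and returns an error if the machine never reaches "Stopped".
    func waitForStop(state vmState, maxAttempts int) error {
    	for i := 0; i < maxAttempts; i++ {
    		s, err := state()
    		if err != nil {
    			return err
    		}
    		if s == "Stopped" {
    			return nil
    		}
    		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
    		time.Sleep(1 * time.Second)
    	}
    	return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
    	// Simulate a VM that never stops, as in the log above.
    	alwaysRunning := func() (string, error) { return "Running", nil }
    	if err := waitForStop(alwaysRunning, 120); err != nil {
    		fmt.Println("stop err:", err)
    	}
    }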
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068: exit status 3 (18.635973434s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:11:39.937725   65202 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.132:22: connect: no route to host
	E0804 00:11:39.937747   65202 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.132:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969068" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598: exit status 3 (3.167726902s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:09:41.505685   64385 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	E0804 00:09:41.505705   64385 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
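The "Error" host state the assertion trips over is produced when the status probe cannot reach the node over SSH ("no route to host" on port 22), so the expected "Stopped" value never gets read from the driver. A rough illustrative sketch of that classification follows; hostState and the hard-coded 192.168.50.140:22 target are assumptions for the example, not minikube's implementation.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // hostState reduces a status probe to the strings the test compares against:
    // "Stopped" would come from the driver, "Error" when the node cannot be
    // reached at all (e.g. "no route to host" on the SSH port).
    func hostState(addr string) string {
    	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    	if err != nil {
    		return "Error" // matches the post-stop status seen in the log
    	}
    	conn.Close()
    	return "Running"
    }

    func main() {
    	fmt.Println(hostState("192.168.50.140:22"))
    }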
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-877598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-877598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15253962s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-877598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
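The MK_ADDON_ENABLE_PAUSED failure above happens before any addon manifest is applied: the enable path first lists containers on the node via crictl over SSH ("check paused: list paused: crictl list"), and that session cannot be established, so the command aborts with exit status 11. A small illustrative sketch of such a gate is below; remoteCrictl and the exact crictl invocation are assumptions for the example, not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // remoteCrictl stands in for the "check paused: list paused: crictl list"
    // chain in the error above: the addon-enable path runs crictl on the node
    // over SSH, so an unreachable host fails the whole command. The real gate
    // inspects container state; this sketch only shows the SSH dependency.
    func remoteCrictl(sshTarget string) ([]byte, error) {
    	cmd := exec.Command("ssh", sshTarget, "sudo crictl ps -a -q")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return nil, fmt.Errorf("check paused: list paused: crictl list: %v: %s", err, out)
    	}
    	return out, nil
    }

    func main() {
    	if _, err := remoteCrictl("docker@192.168.50.140"); err != nil {
    		fmt.Println("addon enable aborted:", err)
    	}
    }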
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598
E0804 00:09:50.666424   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598: exit status 3 (3.063553224s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:09:50.721742   64466 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host
	E0804 00:09:50.721771   64466 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.140:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-877598" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (770.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m46.468273967s)

                                                
                                                
-- stdout --
	* [old-k8s-version-576210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-576210" primary control-plane node in "old-k8s-version-576210" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:10:11.883623   64758 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:10:11.883863   64758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:10:11.883871   64758 out.go:304] Setting ErrFile to fd 2...
	I0804 00:10:11.883875   64758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:10:11.884063   64758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:10:11.884585   64758 out.go:298] Setting JSON to false
	I0804 00:10:11.885570   64758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6756,"bootTime":1722723456,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:10:11.885634   64758 start.go:139] virtualization: kvm guest
	I0804 00:10:11.887960   64758 out.go:177] * [old-k8s-version-576210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:10:11.889529   64758 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:10:11.889555   64758 notify.go:220] Checking for updates...
	I0804 00:10:11.891969   64758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:10:11.893200   64758 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:10:11.894397   64758 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:10:11.895739   64758 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:10:11.897073   64758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:10:11.898814   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:10:11.899428   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:10:11.899516   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:10:11.914337   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0804 00:10:11.914750   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:10:11.915255   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:10:11.915271   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:10:11.915537   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:10:11.915698   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:10:11.917552   64758 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0804 00:10:11.919113   64758 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:10:11.919419   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:10:11.919458   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:10:11.934239   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0804 00:10:11.934681   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:10:11.935119   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:10:11.935142   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:10:11.935453   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:10:11.935683   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:10:11.972506   64758 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:10:11.973632   64758 start.go:297] selected driver: kvm2
	I0804 00:10:11.973646   64758 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:10:11.973772   64758 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:10:11.974554   64758 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:10:11.974629   64758 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:10:11.989854   64758 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:10:11.990301   64758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:10:11.990343   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:10:11.990355   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:10:11.990412   64758 start.go:340] cluster config:
	{Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:10:11.990549   64758 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:10:11.992491   64758 out.go:177] * Starting "old-k8s-version-576210" primary control-plane node in "old-k8s-version-576210" cluster
	I0804 00:10:11.993896   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:10:11.993932   64758 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0804 00:10:11.993939   64758 cache.go:56] Caching tarball of preloaded images
	I0804 00:10:11.994032   64758 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:10:11.994047   64758 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0804 00:10:11.994158   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:10:11.994355   64758 start.go:360] acquireMachinesLock for old-k8s-version-576210: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:14:28.318586   64758 start.go:364] duration metric: took 4m16.324186239s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:14:28.318635   64758 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:28.318646   64758 fix.go:54] fixHost starting: 
	I0804 00:14:28.319092   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:28.319128   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:28.334850   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0804 00:14:28.335321   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:28.335817   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:14:28.335848   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:28.336204   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:28.336435   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:28.336622   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:14:28.338146   64758 fix.go:112] recreateIfNeeded on old-k8s-version-576210: state=Stopped err=<nil>
	I0804 00:14:28.338166   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	W0804 00:14:28.338322   64758 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:28.340640   64758 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	I0804 00:14:28.342217   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .Start
	I0804 00:14:28.342401   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:14:28.343274   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:14:28.343761   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:14:28.344268   64758 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:14:28.345080   64758 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:14:29.575420   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:14:29.576307   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.576754   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.576842   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.576711   66003 retry.go:31] will retry after 272.821874ms: waiting for machine to come up
	I0804 00:14:29.851363   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.851951   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.851976   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.851895   66003 retry.go:31] will retry after 247.116514ms: waiting for machine to come up
	I0804 00:14:30.100479   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.100883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.100916   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.100833   66003 retry.go:31] will retry after 353.251065ms: waiting for machine to come up
	I0804 00:14:30.455526   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.455975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.456004   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.455933   66003 retry.go:31] will retry after 558.071575ms: waiting for machine to come up
	I0804 00:14:31.015539   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.015974   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.016000   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.015917   66003 retry.go:31] will retry after 514.757536ms: waiting for machine to come up
	I0804 00:14:31.532799   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.533232   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.533250   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.533186   66003 retry.go:31] will retry after 607.548546ms: waiting for machine to come up
	I0804 00:14:32.142162   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:32.142658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:32.142693   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:32.142610   66003 retry.go:31] will retry after 897.977595ms: waiting for machine to come up
	I0804 00:14:33.042628   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:33.043002   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:33.043028   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:33.042966   66003 retry.go:31] will retry after 1.094117762s: waiting for machine to come up
	I0804 00:14:34.138946   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:34.139459   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:34.139485   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:34.139414   66003 retry.go:31] will retry after 1.435055372s: waiting for machine to come up
	I0804 00:14:35.576253   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:35.576603   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:35.576625   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:35.576547   66003 retry.go:31] will retry after 1.688006591s: waiting for machine to come up
	I0804 00:14:37.265928   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:37.266429   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:37.266456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:37.266371   66003 retry.go:31] will retry after 2.356818801s: waiting for machine to come up
	I0804 00:14:39.624408   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:39.624832   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:39.624863   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:39.624775   66003 retry.go:31] will retry after 2.41856098s: waiting for machine to come up
	I0804 00:14:42.044498   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:42.044855   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:42.044882   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:42.044822   66003 retry.go:31] will retry after 3.111190148s: waiting for machine to come up
	I0804 00:14:45.158161   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.158688   64758 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:14:45.158709   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:14:45.158719   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.159112   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.159138   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | skip adding static IP to network mk-old-k8s-version-576210 - found existing host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"}
	I0804 00:14:45.159151   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:14:45.159163   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:14:45.159172   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:14:45.161469   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161782   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.161812   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161936   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:14:45.161975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:14:45.162015   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:14:45.162034   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:14:45.162044   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:14:45.281546   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
	I0804 00:14:45.281859   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:14:45.282574   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.284998   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285386   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.285414   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285614   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:14:45.285806   64758 machine.go:94] provisionDockerMachine start ...
	I0804 00:14:45.285823   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:45.286098   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.288285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288640   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.288668   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288753   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.288931   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289088   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289253   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.289426   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.289628   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.289640   64758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:14:45.386001   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:14:45.386036   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386325   64758 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:14:45.386348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386536   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.389316   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389718   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.389739   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389948   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.390122   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390285   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390415   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.390557   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.390758   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.390776   64758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:14:45.499644   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:14:45.499695   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.502583   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.502935   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.502959   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.503123   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.503318   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503456   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503570   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.503729   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.503898   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.503915   64758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:14:45.606971   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:45.607003   64758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:14:45.607045   64758 buildroot.go:174] setting up certificates
	I0804 00:14:45.607053   64758 provision.go:84] configureAuth start
	I0804 00:14:45.607062   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.607327   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.610009   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610378   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.610407   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610545   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.612549   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.612876   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.612908   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.613071   64758 provision.go:143] copyHostCerts
	I0804 00:14:45.613134   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:14:45.613147   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:14:45.613231   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:14:45.613343   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:14:45.613368   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:14:45.613410   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:14:45.613491   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:14:45.613501   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:14:45.613535   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:14:45.613609   64758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
	I0804 00:14:45.794221   64758 provision.go:177] copyRemoteCerts
	I0804 00:14:45.794276   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:14:45.794299   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.796859   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797182   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.797225   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.797555   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.797687   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.797804   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:45.875704   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:14:45.903765   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:14:45.930101   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:14:45.955639   64758 provision.go:87] duration metric: took 348.556108ms to configureAuth
	I0804 00:14:45.955668   64758 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:14:45.955874   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:14:45.955960   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.958487   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958835   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.958950   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958970   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.959193   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.959616   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.959789   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.959810   64758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:14:46.217683   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:14:46.217725   64758 machine.go:97] duration metric: took 931.901933ms to provisionDockerMachine
	I0804 00:14:46.217742   64758 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:14:46.217758   64758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:14:46.217787   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.218127   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:14:46.218151   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.220834   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221148   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.221170   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221342   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.221576   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.221733   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.221867   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.300102   64758 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:14:46.304434   64758 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:14:46.304464   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:14:46.304538   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:14:46.304631   64758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:14:46.304747   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:14:46.314378   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:46.339057   64758 start.go:296] duration metric: took 121.299069ms for postStartSetup
	I0804 00:14:46.339105   64758 fix.go:56] duration metric: took 18.020458894s for fixHost
	I0804 00:14:46.339129   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.341883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342258   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.342285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.342688   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342856   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342992   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.343161   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:46.343385   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:46.343400   64758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:14:46.442247   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730486.414818212
	
	I0804 00:14:46.442275   64758 fix.go:216] guest clock: 1722730486.414818212
	I0804 00:14:46.442288   64758 fix.go:229] Guest: 2024-08-04 00:14:46.414818212 +0000 UTC Remote: 2024-08-04 00:14:46.339109981 +0000 UTC m=+274.490542023 (delta=75.708231ms)
	I0804 00:14:46.442313   64758 fix.go:200] guest clock delta is within tolerance: 75.708231ms
	I0804 00:14:46.442319   64758 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 18.123699316s
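The clock check above reads the guest's wall clock over SSH (date +%s.%N), diffs it against the host clock, and accepts the roughly 76ms drift because it is below the resync threshold. A minimal sketch of that comparison in Go; the 2s tolerance here is an assumption for illustration, not the value minikube actually uses:

	// Sketch: compare guest and host clocks and decide whether a resync is needed.
	// The tolerance constant is an assumption for illustration only.
	package main

	import (
		"fmt"
		"time"
	)

	func clockDelta(guest, host time.Time) time.Duration {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		host := time.Now()
		guest := host.Add(75 * time.Millisecond) // roughly the delta reported in the log
		const tolerance = 2 * time.Second        // assumed threshold
		if d := clockDelta(guest, host); d <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", d)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
		}
	}
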
	I0804 00:14:46.442347   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.442656   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:46.445456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.445865   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.445892   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.446069   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446577   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446743   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446816   64758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:14:46.446850   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.446965   64758 ssh_runner.go:195] Run: cat /version.json
	I0804 00:14:46.446987   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.449576   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449794   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449953   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.449983   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450178   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450265   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.450317   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450384   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450520   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450605   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450667   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450733   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.450780   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450910   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.534686   64758 ssh_runner.go:195] Run: systemctl --version
	I0804 00:14:46.554270   64758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:14:46.708220   64758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:14:46.714541   64758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:14:46.714607   64758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:14:46.731642   64758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:14:46.731668   64758 start.go:495] detecting cgroup driver to use...
	I0804 00:14:46.731739   64758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:14:46.748782   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:14:46.763556   64758 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:14:46.763640   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:14:46.778075   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:14:46.793133   64758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:14:46.918377   64758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:14:47.059683   64758 docker.go:233] disabling docker service ...
	I0804 00:14:47.059753   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:14:47.074819   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:14:47.092184   64758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:14:47.235274   64758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:14:47.357937   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:14:47.375273   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:14:47.395182   64758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:14:47.395236   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.407036   64758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:14:47.407092   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.418562   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.434481   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
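The sed invocations above rewrite CRI-O's drop-in config (/etc/crio/crio.conf.d/02-crio.conf) to pin the pause image, switch the cgroup manager to cgroupfs, and run conmon in the pod cgroup. A hedged sketch of how those edits can be composed from Go; runOnGuest is an assumed stand-in for minikube's SSH runner, not its real API:

	// Sketch only: builds the sed commands seen in the log for CRI-O's drop-in
	// config. runOnGuest is an assumed placeholder; here it just prints the command.
	package main

	import "fmt"

	func crioConfigCmds(pauseImage, cgroupManager string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		}
	}

	func main() {
		runOnGuest := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
		for _, c := range crioConfigCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
			if err := runOnGuest(c); err != nil {
				panic(err)
			}
		}
	}
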
	I0804 00:14:47.447488   64758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:14:47.460242   64758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:14:47.471089   64758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:14:47.471143   64758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:14:47.486698   64758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:14:47.498754   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:47.630867   64758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:14:47.796598   64758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:14:47.796690   64758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:14:47.802302   64758 start.go:563] Will wait 60s for crictl version
	I0804 00:14:47.802364   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:47.806368   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:14:47.847588   64758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:14:47.847679   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.877936   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.908229   64758 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:14:47.909635   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:47.912658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913102   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:47.913130   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913438   64758 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:14:47.917910   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:47.931201   64758 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:14:47.931318   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:14:47.931381   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:47.980001   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:47.980071   64758 ssh_runner.go:195] Run: which lz4
	I0804 00:14:47.984277   64758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:14:47.988781   64758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:14:47.988810   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:14:49.706968   64758 crio.go:462] duration metric: took 1.722721175s to copy over tarball
	I0804 00:14:49.707059   64758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:14:52.511242   64758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.804147671s)
	I0804 00:14:52.511275   64758 crio.go:469] duration metric: took 2.804279705s to extract the tarball
	I0804 00:14:52.511285   64758 ssh_runner.go:146] rm: /preloaded.tar.lz4
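Above, the preload step stats /preloaded.tar.lz4 on the guest, copies the ~473 MB tarball over when the stat fails, unpacks it into /var with extended attributes preserved, and removes the tarball. A sketch of that check-then-extract flow, with runOnGuest and scpToGuest as assumed placeholder helpers:

	// Sketch of the preload flow from the log: copy the tarball if missing,
	// extract it into /var, then delete it. The two helpers are assumed
	// placeholders for SSH/SCP runners.
	package main

	import "fmt"

	func ensurePreload(runOnGuest func(string) error, scpToGuest func(src, dst string) error, localTarball string) error {
		const remote = "/preloaded.tar.lz4"
		if err := runOnGuest(`stat -c "%s %y" ` + remote); err != nil {
			// Not present on the guest yet: copy it over.
			if err := scpToGuest(localTarball, remote); err != nil {
				return fmt.Errorf("copying preload: %w", err)
			}
		}
		if err := runOnGuest("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
			return fmt.Errorf("extracting preload: %w", err)
		}
		return runOnGuest("sudo rm -f " + remote)
	}

	func main() {
		run := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
		scp := func(src, dst string) error { fmt.Println("would copy:", src, "->", dst); return nil }
		_ = ensurePreload(run, scp, "preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
	}
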
	I0804 00:14:52.553905   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:52.587405   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:52.587429   64758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:14:52.587496   64758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.587513   64758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.587550   64758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.587551   64758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.587554   64758 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.587567   64758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.587570   64758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.587577   64758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.589240   64758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.589239   64758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.589247   64758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.589211   64758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.589287   64758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589579   64758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.742969   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.766505   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.782813   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:14:52.788509   64758 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:14:52.788553   64758 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.788598   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.823108   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.829531   64758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:14:52.829577   64758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.829648   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.858209   64758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:14:52.858238   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.858245   64758 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:14:52.858288   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.888665   64758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:14:52.888717   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.888748   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:14:52.888717   64758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.888794   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.918127   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.921386   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:14:52.929839   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.977866   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:14:52.977919   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.977960   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:14:52.994379   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.003198   64758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:14:53.003233   64758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.003273   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.056310   64758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:14:53.056338   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:14:53.056357   64758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.056403   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.062077   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.062119   64758 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:14:53.062161   64758 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.062206   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.064260   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.114709   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:14:53.114758   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.118375   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:14:53.147635   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:14:53.497155   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:53.647242   64758 cache_images.go:92] duration metric: took 1.059794593s to LoadCachedImages
	W0804 00:14:53.647353   64758 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
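The cache pass above inspects each required image with podman, marks any image that is absent at the expected hash as needing transfer, removes stale copies with crictl rmi, and then tries to load replacements from the local cache directory; here the cache files do not exist, so LoadCachedImages fails and startup continues without them. A small sketch of that per-image decision, with the expected hash and the inspect helper assumed:

	// Sketch of the "needs transfer" decision from the log: an image needs to be
	// transferred when it is not present in the runtime at the expected hash.
	// expectedHash and the inspect helper are assumed placeholders.
	package main

	import "fmt"

	func needsTransfer(image, expectedHash string, inspectID func(string) (string, error)) bool {
		id, err := inspectID(image)
		return err != nil || id != expectedHash
	}

	func main() {
		inspect := func(image string) (string, error) { return "", fmt.Errorf("no such image: %s", image) }
		img := "registry.k8s.io/coredns:1.7.0"
		if needsTransfer(img, "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16", inspect) {
			fmt.Printf("%q needs transfer: loading from local cache\n", img)
		}
	}
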
	I0804 00:14:53.647370   64758 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:14:53.647507   64758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:14:53.647586   64758 ssh_runner.go:195] Run: crio config
	I0804 00:14:53.710377   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:14:53.710399   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:14:53.710411   64758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:14:53.710437   64758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:14:53.710583   64758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:14:53.710661   64758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:14:53.721942   64758 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:14:53.722005   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:14:53.732623   64758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:14:53.749878   64758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:14:53.767147   64758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
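The kubeadm.yaml written to the guest above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that kubeadm, the kubelet, and kube-proxy each read their own section from. A standalone sketch, not minikube code, that walks such a stream with gopkg.in/yaml.v3 to sanity-check a locally saved copy:

	// Sketch: iterate the documents in a multi-document kubeadm config and print
	// each document's kind/apiVersion. The file name is an assumed local copy.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				panic(err)
			}
			fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
		}
	}
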
	I0804 00:14:53.785522   64758 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:14:53.789438   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:53.802152   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:53.934508   64758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:14:53.952247   64758 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:14:53.952280   64758 certs.go:194] generating shared ca certs ...
	I0804 00:14:53.952301   64758 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:53.952470   64758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:14:53.952523   64758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:14:53.952536   64758 certs.go:256] generating profile certs ...
	I0804 00:14:53.952658   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:14:53.952730   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:14:53.952783   64758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:14:53.952948   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:14:53.953000   64758 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:14:53.953013   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:14:53.953048   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:14:53.953084   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:14:53.953114   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:14:53.953191   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:53.954013   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:14:54.001446   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:14:54.029628   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:14:54.062713   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:14:54.090711   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:14:54.117970   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:14:54.163691   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:14:54.190151   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:14:54.219334   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:14:54.244677   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:14:54.269795   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:14:54.294949   64758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:14:54.312330   64758 ssh_runner.go:195] Run: openssl version
	I0804 00:14:54.318320   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:14:54.328932   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333686   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333737   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.341330   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:14:54.356008   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:14:54.368966   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373896   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373954   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.379770   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:14:54.390903   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:14:54.402637   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407296   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407362   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.413215   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
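The openssl and ln steps above are the usual CA installation pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so TLS libraries can locate it. A hedged local sketch of the same steps using os/exec; the paths are illustrative and, as in the log, creating the symlink needs root:

	// Sketch: compute a certificate's OpenSSL subject hash and create the
	// /etc/ssl/certs/<hash>.0 symlink, mirroring the commands in the log.
	// Paths are illustrative; running this for real requires root.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // assumed location
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs equivalent: replace any existing link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}
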
	I0804 00:14:54.424473   64758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:14:54.429673   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:14:54.436038   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:14:54.442091   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:14:54.448507   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:14:54.455421   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:14:54.461969   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
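Each -checkend 86400 call above asks whether a certificate expires within the next 24 hours, which would trigger regeneration. The same check can be expressed natively with crypto/x509, as in this standalone sketch (the file path is illustrative, not taken from this run):

	// Sketch: report whether a PEM-encoded certificate expires within the next
	// 24 hours, equivalent to `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
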
	I0804 00:14:54.468042   64758 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:14:54.468151   64758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:14:54.468208   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.508109   64758 cri.go:89] found id: ""
	I0804 00:14:54.508183   64758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:14:54.518712   64758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:14:54.518736   64758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:14:54.518788   64758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:14:54.528545   64758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:14:54.529780   64758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:14:54.530411   64758 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-576210" cluster setting kubeconfig missing "old-k8s-version-576210" context setting]
	I0804 00:14:54.531316   64758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:54.550431   64758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:14:54.561047   64758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.154
	I0804 00:14:54.561086   64758 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:14:54.561108   64758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:14:54.561163   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.597213   64758 cri.go:89] found id: ""
	I0804 00:14:54.597282   64758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:14:54.612914   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:14:54.622533   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:14:54.622562   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:14:54.622613   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:14:54.632746   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:14:54.632812   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:14:54.642197   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:14:54.651204   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:14:54.651268   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:14:54.660496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.669448   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:14:54.669512   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.678773   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:14:54.687854   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:14:54.687902   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:14:54.697066   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
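The cleanup loop above greps each existing /etc/kubernetes/*.conf for the expected https://control-plane.minikube.internal:8443 endpoint and deletes any file that does not reference it so kubeadm will regenerate it; in this run none of the files exist, so every grep fails and the rm calls are no-ops. A hedged sketch of that per-file check, again with an assumed guest-command helper:

	// Sketch of the stale-kubeconfig cleanup from the log: keep a kubeconfig only
	// if it references the expected control-plane endpoint, otherwise remove it so
	// kubeadm recreates it. runOnGuest is an assumed SSH helper.
	package main

	import "fmt"

	func cleanStaleConfigs(runOnGuest func(string) error, endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint (or the file) is missing.
			if err := runOnGuest(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
				_ = runOnGuest("sudo rm -f " + f)
			}
		}
	}

	func main() {
		run := func(cmd string) error { fmt.Println("would run:", cmd); return fmt.Errorf("not found") }
		cleanStaleConfigs(run, "https://control-plane.minikube.internal:8443")
	}
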
	I0804 00:14:54.707036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:54.840553   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.551919   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.790500   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.898210   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.995621   64758 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:14:55.995711   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.496072   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.995965   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.496285   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.995805   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.496549   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.996224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.496360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.996056   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.496435   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.996148   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.496756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.996430   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.496646   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.996707   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.496772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.995997   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.496651   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.996384   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.496403   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.995779   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.495822   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.995970   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.495870   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.996379   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.495852   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.495912   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.996591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.495964   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.996494   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.496005   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.996429   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.496310   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.996525   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.495995   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.996172   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.495809   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.996016   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.496210   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.996765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.496069   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.995828   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.495847   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.996276   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.496155   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.996708   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.996145   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.496193   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.996520   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.495922   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.995766   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.495923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.995770   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.496788   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.996759   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.996017   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.496445   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.996399   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.496810   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.995825   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.496395   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.996561   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.496735   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.996542   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.496406   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.996259   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.496307   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.996780   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.496164   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.996444   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.496838   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.996533   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.496300   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.996772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.495937   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.996834   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.496277   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.996761   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.495885   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.995785   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.496550   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.996645   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.995851   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.496685   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.995896   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.495864   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.995808   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.496612   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.996566   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.495812   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.996095   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.495902   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.996724   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.495854   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.996354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.496185   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.996215   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.496634   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.996278   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.496184   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.996616   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.496240   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.996433   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.996600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.496459   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.996447   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.496265   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.996030   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.996313   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.495823   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.996360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.496652   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.996049   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:55.996141   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:56.045001   64758 cri.go:89] found id: ""
	I0804 00:15:56.045031   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.045042   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:56.045049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:56.045114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:56.086505   64758 cri.go:89] found id: ""
	I0804 00:15:56.086535   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.086547   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:56.086554   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:56.086618   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:56.125953   64758 cri.go:89] found id: ""
	I0804 00:15:56.125983   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.125994   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:56.126001   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:56.126060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:56.167313   64758 cri.go:89] found id: ""
	I0804 00:15:56.167343   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.167354   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:56.167361   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:56.167424   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:56.211102   64758 cri.go:89] found id: ""
	I0804 00:15:56.211132   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.211142   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:56.211149   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:56.211231   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:56.246894   64758 cri.go:89] found id: ""
	I0804 00:15:56.246926   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.246937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:56.246945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:56.247012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:56.281952   64758 cri.go:89] found id: ""
	I0804 00:15:56.281980   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.281991   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:56.281998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:56.282060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:56.317685   64758 cri.go:89] found id: ""
	I0804 00:15:56.317719   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.317733   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:56.317745   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:56.317762   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:56.335040   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:56.335069   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:56.475995   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:56.476017   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:56.476033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:56.567508   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:56.567551   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:56.618136   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:56.618166   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.172886   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:59.187045   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:59.187128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:59.225135   64758 cri.go:89] found id: ""
	I0804 00:15:59.225164   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.225173   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:59.225179   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:59.225255   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:59.262538   64758 cri.go:89] found id: ""
	I0804 00:15:59.262566   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.262573   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:59.262578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:59.262635   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:59.301665   64758 cri.go:89] found id: ""
	I0804 00:15:59.301697   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.301708   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:59.301715   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:59.301778   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:59.362742   64758 cri.go:89] found id: ""
	I0804 00:15:59.362766   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.362774   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:59.362779   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:59.362834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:59.404398   64758 cri.go:89] found id: ""
	I0804 00:15:59.404431   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.404509   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:59.404525   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:59.404594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:59.454257   64758 cri.go:89] found id: ""
	I0804 00:15:59.454285   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.454297   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:59.454305   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:59.454363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:59.496790   64758 cri.go:89] found id: ""
	I0804 00:15:59.496818   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.496829   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:59.496837   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:59.496896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:59.537395   64758 cri.go:89] found id: ""
	I0804 00:15:59.537424   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.537431   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:59.537439   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:59.537453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.600005   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:59.600042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:59.617304   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:59.617336   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:59.692828   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:59.692849   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:59.692864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:59.764000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:59.764038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:02.307325   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:02.324168   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:02.324233   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:02.370204   64758 cri.go:89] found id: ""
	I0804 00:16:02.370234   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.370250   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:02.370258   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:02.370325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:02.405586   64758 cri.go:89] found id: ""
	I0804 00:16:02.405616   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.405628   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:02.405636   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:02.405694   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:02.445644   64758 cri.go:89] found id: ""
	I0804 00:16:02.445665   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.445675   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:02.445682   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:02.445739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:02.483659   64758 cri.go:89] found id: ""
	I0804 00:16:02.483686   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.483695   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:02.483701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:02.483751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:02.519903   64758 cri.go:89] found id: ""
	I0804 00:16:02.519929   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.519938   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:02.519944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:02.519991   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:02.557373   64758 cri.go:89] found id: ""
	I0804 00:16:02.557401   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.557410   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:02.557416   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:02.557472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:02.594203   64758 cri.go:89] found id: ""
	I0804 00:16:02.594238   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.594249   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:02.594256   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:02.594316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:02.635487   64758 cri.go:89] found id: ""
	I0804 00:16:02.635512   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.635520   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:02.635529   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:02.635543   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:02.686990   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:02.687035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:02.701784   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:02.701810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:02.778626   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:02.778648   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:02.778662   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:02.856056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:02.856097   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:05.402858   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:05.418825   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:05.418900   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:05.458789   64758 cri.go:89] found id: ""
	I0804 00:16:05.458872   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.458887   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:05.458895   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:05.458967   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:05.498258   64758 cri.go:89] found id: ""
	I0804 00:16:05.498284   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.498295   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:05.498302   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:05.498364   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:05.540892   64758 cri.go:89] found id: ""
	I0804 00:16:05.540919   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.540927   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:05.540933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:05.540992   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:05.578876   64758 cri.go:89] found id: ""
	I0804 00:16:05.578911   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.578919   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:05.578924   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:05.578971   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:05.616248   64758 cri.go:89] found id: ""
	I0804 00:16:05.616272   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.616280   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:05.616285   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:05.616339   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:05.654387   64758 cri.go:89] found id: ""
	I0804 00:16:05.654419   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.654428   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:05.654436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:05.654528   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:05.695579   64758 cri.go:89] found id: ""
	I0804 00:16:05.695613   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.695625   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:05.695669   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:05.695752   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:05.740754   64758 cri.go:89] found id: ""
	I0804 00:16:05.740777   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.740785   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:05.740793   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:05.740805   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:05.792091   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:05.792126   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:05.809130   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:05.809164   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:05.888441   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:05.888465   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:05.888479   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:05.969336   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:05.969390   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:08.514981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:08.531117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:08.531188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:08.569167   64758 cri.go:89] found id: ""
	I0804 00:16:08.569199   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.569210   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:08.569218   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:08.569282   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:08.608478   64758 cri.go:89] found id: ""
	I0804 00:16:08.608559   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.608572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:08.608580   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:08.608636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:08.645939   64758 cri.go:89] found id: ""
	I0804 00:16:08.645972   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.645983   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:08.645990   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:08.646050   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:08.685274   64758 cri.go:89] found id: ""
	I0804 00:16:08.685305   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.685316   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:08.685324   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:08.685400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:08.722314   64758 cri.go:89] found id: ""
	I0804 00:16:08.722345   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.722357   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:08.722363   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:08.722427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:08.758577   64758 cri.go:89] found id: ""
	I0804 00:16:08.758606   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.758617   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:08.758624   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:08.758685   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.798734   64758 cri.go:89] found id: ""
	I0804 00:16:08.798761   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.798773   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:08.798781   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:08.798842   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:08.837577   64758 cri.go:89] found id: ""
	I0804 00:16:08.837600   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.837608   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:08.837616   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:08.837627   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:08.894426   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:08.894465   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:08.909851   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:08.909879   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:08.989858   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:08.989878   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:08.989893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:09.081056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:09.081098   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:11.627914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:11.641805   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:11.641896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:11.679002   64758 cri.go:89] found id: ""
	I0804 00:16:11.679028   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.679036   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:11.679042   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:11.679090   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:11.720188   64758 cri.go:89] found id: ""
	I0804 00:16:11.720220   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.720236   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:11.720245   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:11.720307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:11.760085   64758 cri.go:89] found id: ""
	I0804 00:16:11.760118   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.760130   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:11.760138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:11.760198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:11.796220   64758 cri.go:89] found id: ""
	I0804 00:16:11.796249   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.796266   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:11.796274   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:11.796335   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:11.834216   64758 cri.go:89] found id: ""
	I0804 00:16:11.834243   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.834253   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:11.834260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:11.834336   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:11.869205   64758 cri.go:89] found id: ""
	I0804 00:16:11.869230   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.869237   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:11.869243   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:11.869301   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:11.912091   64758 cri.go:89] found id: ""
	I0804 00:16:11.912120   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.912132   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:11.912145   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:11.912203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:11.949570   64758 cri.go:89] found id: ""
	I0804 00:16:11.949603   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.949614   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:11.949625   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:11.949643   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:12.006542   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:12.006575   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:12.022435   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:12.022474   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:12.101007   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:12.101032   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:12.101057   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:12.183836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:12.183876   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:14.725345   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:14.738389   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:14.738464   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:14.780103   64758 cri.go:89] found id: ""
	I0804 00:16:14.780133   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.780142   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:14.780147   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:14.780197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:14.817811   64758 cri.go:89] found id: ""
	I0804 00:16:14.817847   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.817863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:14.817872   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:14.817946   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:14.854450   64758 cri.go:89] found id: ""
	I0804 00:16:14.854478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.854488   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:14.854495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:14.854561   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:14.891862   64758 cri.go:89] found id: ""
	I0804 00:16:14.891891   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.891900   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:14.891905   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:14.891958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:14.928450   64758 cri.go:89] found id: ""
	I0804 00:16:14.928478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.928488   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:14.928495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:14.928554   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:14.965820   64758 cri.go:89] found id: ""
	I0804 00:16:14.965848   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.965860   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:14.965867   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:14.965945   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:15.008725   64758 cri.go:89] found id: ""
	I0804 00:16:15.008874   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.008888   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:15.008897   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:15.008957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:15.044618   64758 cri.go:89] found id: ""
	I0804 00:16:15.044768   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.044792   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:15.044802   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:15.044815   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:15.102786   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:15.102825   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:15.118305   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:15.118347   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:15.196397   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:15.196420   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:15.196435   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:15.277941   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:15.277986   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:17.819354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:17.834271   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:17.834332   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:17.870930   64758 cri.go:89] found id: ""
	I0804 00:16:17.870961   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.870973   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:17.870980   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:17.871040   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:17.907980   64758 cri.go:89] found id: ""
	I0804 00:16:17.908007   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.908016   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:17.908021   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:17.908067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:17.943257   64758 cri.go:89] found id: ""
	I0804 00:16:17.943284   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.943295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:17.943301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:17.943363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:17.982297   64758 cri.go:89] found id: ""
	I0804 00:16:17.982328   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.982338   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:17.982345   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:17.982405   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:18.022780   64758 cri.go:89] found id: ""
	I0804 00:16:18.022810   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.022841   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:18.022850   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:18.022913   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:18.061891   64758 cri.go:89] found id: ""
	I0804 00:16:18.061926   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.061937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:18.061945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:18.062012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:18.100807   64758 cri.go:89] found id: ""
	I0804 00:16:18.100845   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.100855   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:18.100862   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:18.100917   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:18.142011   64758 cri.go:89] found id: ""
	I0804 00:16:18.142044   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.142056   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:18.142066   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:18.142090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:18.195476   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:18.195511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:18.209661   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:18.209690   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:18.282638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:18.282657   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:18.282669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:18.363900   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:18.363938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:20.908753   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:20.922878   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:20.922962   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:20.961013   64758 cri.go:89] found id: ""
	I0804 00:16:20.961041   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.961052   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:20.961058   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:20.961109   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:20.998027   64758 cri.go:89] found id: ""
	I0804 00:16:20.998059   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.998068   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:20.998074   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:20.998121   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:21.035640   64758 cri.go:89] found id: ""
	I0804 00:16:21.035669   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.035680   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:21.035688   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:21.035751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:21.075737   64758 cri.go:89] found id: ""
	I0804 00:16:21.075770   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.075779   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:21.075786   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:21.075846   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:21.120024   64758 cri.go:89] found id: ""
	I0804 00:16:21.120046   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.120054   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:21.120061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:21.120126   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:21.160796   64758 cri.go:89] found id: ""
	I0804 00:16:21.160821   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.160840   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:21.160847   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:21.160907   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:21.195519   64758 cri.go:89] found id: ""
	I0804 00:16:21.195547   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.195558   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:21.195566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:21.195629   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:21.236193   64758 cri.go:89] found id: ""
	I0804 00:16:21.236222   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.236232   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:21.236243   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:21.236258   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:21.295154   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:21.295198   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:21.309540   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:21.309566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:21.389391   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:21.389416   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:21.389433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:21.472771   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:21.472808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.018923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:24.032954   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:24.033013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:24.073677   64758 cri.go:89] found id: ""
	I0804 00:16:24.073703   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.073711   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:24.073716   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:24.073777   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:24.115752   64758 cri.go:89] found id: ""
	I0804 00:16:24.115775   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.115785   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:24.115792   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:24.115849   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:24.152967   64758 cri.go:89] found id: ""
	I0804 00:16:24.153001   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.153017   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:24.153024   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:24.153098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:24.190557   64758 cri.go:89] found id: ""
	I0804 00:16:24.190581   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.190589   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:24.190595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:24.190643   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:24.229312   64758 cri.go:89] found id: ""
	I0804 00:16:24.229341   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.229351   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:24.229373   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:24.229437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:24.265076   64758 cri.go:89] found id: ""
	I0804 00:16:24.265100   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.265107   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:24.265113   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:24.265167   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:24.306508   64758 cri.go:89] found id: ""
	I0804 00:16:24.306534   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.306542   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:24.306547   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:24.306598   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:24.350714   64758 cri.go:89] found id: ""
	I0804 00:16:24.350747   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.350759   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:24.350770   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:24.350785   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:24.366188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:24.366216   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:24.438410   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:24.438431   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:24.438447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:24.522635   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:24.522669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.562647   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:24.562678   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.119437   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:27.133330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:27.133426   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:27.170001   64758 cri.go:89] found id: ""
	I0804 00:16:27.170039   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.170048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:27.170054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:27.170112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:27.205811   64758 cri.go:89] found id: ""
	I0804 00:16:27.205843   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.205854   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:27.205861   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:27.205922   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:27.247249   64758 cri.go:89] found id: ""
	I0804 00:16:27.247278   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.247287   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:27.247294   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:27.247360   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:27.285659   64758 cri.go:89] found id: ""
	I0804 00:16:27.285688   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.285697   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:27.285703   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:27.285774   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:27.321039   64758 cri.go:89] found id: ""
	I0804 00:16:27.321066   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.321075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:27.321084   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:27.321130   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:27.359947   64758 cri.go:89] found id: ""
	I0804 00:16:27.359977   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.359988   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:27.359996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:27.360056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:27.401408   64758 cri.go:89] found id: ""
	I0804 00:16:27.401432   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.401440   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:27.401449   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:27.401495   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:27.437297   64758 cri.go:89] found id: ""
	I0804 00:16:27.437326   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.437337   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:27.437347   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:27.437373   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.490594   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:27.490639   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:27.505993   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:27.506021   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:27.588779   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:27.588804   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:27.588820   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:27.681557   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:27.681592   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.225062   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:30.239475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:30.239540   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:30.283896   64758 cri.go:89] found id: ""
	I0804 00:16:30.283923   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.283931   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:30.283938   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:30.284013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:30.321506   64758 cri.go:89] found id: ""
	I0804 00:16:30.321532   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.321539   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:30.321545   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:30.321593   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:30.358314   64758 cri.go:89] found id: ""
	I0804 00:16:30.358340   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.358347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:30.358353   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:30.358400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:30.393561   64758 cri.go:89] found id: ""
	I0804 00:16:30.393587   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.393595   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:30.393600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:30.393646   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:30.429907   64758 cri.go:89] found id: ""
	I0804 00:16:30.429935   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.429943   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:30.429949   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:30.430008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:30.466305   64758 cri.go:89] found id: ""
	I0804 00:16:30.466332   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.466342   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:30.466350   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:30.466408   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:30.505384   64758 cri.go:89] found id: ""
	I0804 00:16:30.505413   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.505424   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:30.505431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:30.505492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:30.541756   64758 cri.go:89] found id: ""
	I0804 00:16:30.541786   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.541796   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:30.541806   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:30.541821   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:30.555516   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:30.555554   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:30.627442   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:30.627463   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:30.627473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:30.701452   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:30.701489   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.743436   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:30.743473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.298898   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:33.315211   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:33.315292   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:33.353171   64758 cri.go:89] found id: ""
	I0804 00:16:33.353207   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.353220   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:33.353229   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:33.353297   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:33.389767   64758 cri.go:89] found id: ""
	I0804 00:16:33.389792   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.389799   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:33.389805   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:33.389851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:33.446889   64758 cri.go:89] found id: ""
	I0804 00:16:33.446928   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.446939   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:33.446946   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:33.447004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:33.487340   64758 cri.go:89] found id: ""
	I0804 00:16:33.487362   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.487370   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:33.487376   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:33.487423   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:33.530398   64758 cri.go:89] found id: ""
	I0804 00:16:33.530421   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.530429   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:33.530435   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:33.530483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:33.568725   64758 cri.go:89] found id: ""
	I0804 00:16:33.568753   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.568762   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:33.568769   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:33.568818   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:33.607205   64758 cri.go:89] found id: ""
	I0804 00:16:33.607232   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.607242   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:33.607249   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:33.607311   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:33.648188   64758 cri.go:89] found id: ""
	I0804 00:16:33.648220   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.648230   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:33.648240   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:33.648256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.700231   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:33.700266   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:33.714899   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:33.714932   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:33.794306   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:33.794326   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:33.794340   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.872446   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:33.872482   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.415000   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:36.428920   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:36.428996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:36.464784   64758 cri.go:89] found id: ""
	I0804 00:16:36.464810   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.464817   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:36.464823   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:36.464925   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:36.501394   64758 cri.go:89] found id: ""
	I0804 00:16:36.501423   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.501431   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:36.501437   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:36.501497   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:36.537049   64758 cri.go:89] found id: ""
	I0804 00:16:36.537079   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.537090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:36.537102   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:36.537173   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:36.573956   64758 cri.go:89] found id: ""
	I0804 00:16:36.573986   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.573997   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:36.574004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:36.574065   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:36.612996   64758 cri.go:89] found id: ""
	I0804 00:16:36.613016   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.613023   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:36.613029   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:36.613083   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:36.652346   64758 cri.go:89] found id: ""
	I0804 00:16:36.652367   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.652374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:36.652380   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:36.652437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:36.690073   64758 cri.go:89] found id: ""
	I0804 00:16:36.690100   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.690110   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:36.690119   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:36.690182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:36.732436   64758 cri.go:89] found id: ""
	I0804 00:16:36.732466   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.732477   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:36.732487   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:36.732505   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:36.746036   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:36.746060   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:36.818141   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:36.818164   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:36.818179   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:36.907689   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:36.907732   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.947104   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:36.947135   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.502960   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:39.516340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:39.516414   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:39.555903   64758 cri.go:89] found id: ""
	I0804 00:16:39.555929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.555939   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:39.555946   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:39.556004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:39.599791   64758 cri.go:89] found id: ""
	I0804 00:16:39.599816   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.599827   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:39.599834   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:39.599894   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:39.642903   64758 cri.go:89] found id: ""
	I0804 00:16:39.642929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.642936   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:39.642944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:39.643004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:39.678667   64758 cri.go:89] found id: ""
	I0804 00:16:39.678693   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.678702   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:39.678709   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:39.678757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:39.716888   64758 cri.go:89] found id: ""
	I0804 00:16:39.716916   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.716926   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:39.716933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:39.717001   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:39.751576   64758 cri.go:89] found id: ""
	I0804 00:16:39.751602   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.751610   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:39.751616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:39.751664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:39.794026   64758 cri.go:89] found id: ""
	I0804 00:16:39.794056   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.794067   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:39.794087   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:39.794158   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:39.841426   64758 cri.go:89] found id: ""
	I0804 00:16:39.841454   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.841464   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:39.841474   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:39.841492   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.902579   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:39.902616   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:39.924467   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:39.924495   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:40.001318   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:40.001345   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:40.001377   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:40.081520   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:40.081552   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.623094   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:42.636523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:42.636594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:42.674188   64758 cri.go:89] found id: ""
	I0804 00:16:42.674218   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.674226   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:42.674231   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:42.674277   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:42.708496   64758 cri.go:89] found id: ""
	I0804 00:16:42.708522   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.708532   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:42.708539   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:42.708601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:42.751050   64758 cri.go:89] found id: ""
	I0804 00:16:42.751087   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.751100   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:42.751107   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:42.751170   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:42.788520   64758 cri.go:89] found id: ""
	I0804 00:16:42.788546   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.788555   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:42.788560   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:42.788619   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:42.828273   64758 cri.go:89] found id: ""
	I0804 00:16:42.828297   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.828304   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:42.828309   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:42.828356   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:42.867754   64758 cri.go:89] found id: ""
	I0804 00:16:42.867784   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.867799   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:42.867807   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:42.867864   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:42.903945   64758 cri.go:89] found id: ""
	I0804 00:16:42.903977   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.903988   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:42.903996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:42.904059   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:42.942477   64758 cri.go:89] found id: ""
	I0804 00:16:42.942518   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.942539   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:42.942549   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:42.942565   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.981776   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:42.981810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:43.037601   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:43.037634   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:43.052719   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:43.052746   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:43.122664   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:43.122688   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:43.122702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:45.701275   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:45.714532   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:45.714607   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:45.750932   64758 cri.go:89] found id: ""
	I0804 00:16:45.750955   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.750986   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:45.750991   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:45.751042   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:45.787348   64758 cri.go:89] found id: ""
	I0804 00:16:45.787373   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.787381   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:45.787387   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:45.787441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:45.823390   64758 cri.go:89] found id: ""
	I0804 00:16:45.823419   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.823429   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:45.823436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:45.823498   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:45.861400   64758 cri.go:89] found id: ""
	I0804 00:16:45.861430   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.861440   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:45.861448   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:45.861508   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:45.898992   64758 cri.go:89] found id: ""
	I0804 00:16:45.899024   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.899036   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:45.899043   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:45.899110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:45.934542   64758 cri.go:89] found id: ""
	I0804 00:16:45.934570   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.934582   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:45.934589   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:45.934648   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:45.967908   64758 cri.go:89] found id: ""
	I0804 00:16:45.967938   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.967949   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:45.967957   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:45.968018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:46.006475   64758 cri.go:89] found id: ""
	I0804 00:16:46.006504   64758 logs.go:276] 0 containers: []
	W0804 00:16:46.006516   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:46.006526   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:46.006541   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:46.058760   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:46.058793   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:46.074753   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:46.074777   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:46.149634   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:46.149655   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:46.149671   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:46.230104   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:46.230140   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:48.772224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:48.785848   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:48.785935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.825206   64758 cri.go:89] found id: ""
	I0804 00:16:48.825232   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.825242   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:48.825249   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:48.825315   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:48.861559   64758 cri.go:89] found id: ""
	I0804 00:16:48.861588   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.861599   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:48.861607   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:48.861675   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:48.903375   64758 cri.go:89] found id: ""
	I0804 00:16:48.903401   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.903412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:48.903419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:48.903480   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:48.940708   64758 cri.go:89] found id: ""
	I0804 00:16:48.940736   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.940748   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:48.940755   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:48.940817   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:48.976190   64758 cri.go:89] found id: ""
	I0804 00:16:48.976218   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.976228   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:48.976236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:48.976291   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:49.010393   64758 cri.go:89] found id: ""
	I0804 00:16:49.010423   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.010434   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:49.010442   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:49.010506   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:49.046670   64758 cri.go:89] found id: ""
	I0804 00:16:49.046698   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.046707   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:49.046711   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:49.046759   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:49.085254   64758 cri.go:89] found id: ""
	I0804 00:16:49.085284   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.085293   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:49.085302   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:49.085314   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:49.142402   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:49.142433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:49.157063   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:49.157092   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:49.233808   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:49.233829   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:49.233841   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:49.320355   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:49.320395   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:51.862548   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:51.875679   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:51.875750   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:51.911400   64758 cri.go:89] found id: ""
	I0804 00:16:51.911427   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.911437   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:51.911444   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:51.911505   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:51.948825   64758 cri.go:89] found id: ""
	I0804 00:16:51.948853   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.948863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:51.948870   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:51.948935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:51.989458   64758 cri.go:89] found id: ""
	I0804 00:16:51.989488   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.989499   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:51.989506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:51.989568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:52.026663   64758 cri.go:89] found id: ""
	I0804 00:16:52.026685   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.026693   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:52.026698   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:52.026754   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:52.066089   64758 cri.go:89] found id: ""
	I0804 00:16:52.066115   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.066127   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:52.066135   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:52.066198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:52.102159   64758 cri.go:89] found id: ""
	I0804 00:16:52.102185   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.102196   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:52.102203   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:52.102258   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:52.144239   64758 cri.go:89] found id: ""
	I0804 00:16:52.144266   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.144276   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:52.144283   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:52.144344   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:52.180679   64758 cri.go:89] found id: ""
	I0804 00:16:52.180708   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.180717   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:52.180725   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:52.180738   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:52.262074   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:52.262116   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.305913   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:52.305948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:52.357044   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:52.357081   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:52.372090   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:52.372119   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:52.444148   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:54.944910   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:54.958182   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:54.958239   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:54.993629   64758 cri.go:89] found id: ""
	I0804 00:16:54.993657   64758 logs.go:276] 0 containers: []
	W0804 00:16:54.993668   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:54.993675   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:54.993734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:55.029270   64758 cri.go:89] found id: ""
	I0804 00:16:55.029299   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.029310   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:55.029317   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:55.029393   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:55.067923   64758 cri.go:89] found id: ""
	I0804 00:16:55.067951   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.067961   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:55.067968   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:55.068027   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:55.107533   64758 cri.go:89] found id: ""
	I0804 00:16:55.107556   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.107565   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:55.107572   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:55.107633   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:55.143828   64758 cri.go:89] found id: ""
	I0804 00:16:55.143856   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.143868   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:55.143875   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:55.143940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:55.177960   64758 cri.go:89] found id: ""
	I0804 00:16:55.178015   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.178030   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:55.178038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:55.178112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:55.217457   64758 cri.go:89] found id: ""
	I0804 00:16:55.217481   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.217488   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:55.217494   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:55.217538   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:55.259862   64758 cri.go:89] found id: ""
	I0804 00:16:55.259890   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.259898   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:55.259907   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:55.259918   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:55.311566   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:55.311598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:55.327833   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:55.327866   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:55.406475   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:55.406495   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:55.406511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:55.484586   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:55.484618   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.028251   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:58.042169   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:58.042236   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:58.076836   64758 cri.go:89] found id: ""
	I0804 00:16:58.076859   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.076868   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:58.076873   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:58.076937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:58.115989   64758 cri.go:89] found id: ""
	I0804 00:16:58.116019   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.116031   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:58.116037   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:58.116099   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:58.155049   64758 cri.go:89] found id: ""
	I0804 00:16:58.155079   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.155090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:58.155097   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:58.155160   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:58.190257   64758 cri.go:89] found id: ""
	I0804 00:16:58.190293   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.190305   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:58.190315   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:58.190370   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:58.225001   64758 cri.go:89] found id: ""
	I0804 00:16:58.225029   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.225038   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:58.225061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:58.225118   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:58.268881   64758 cri.go:89] found id: ""
	I0804 00:16:58.268925   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.268937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:58.268945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:58.269010   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:58.305223   64758 cri.go:89] found id: ""
	I0804 00:16:58.305253   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.305269   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:58.305277   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:58.305340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:58.340517   64758 cri.go:89] found id: ""
	I0804 00:16:58.340548   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.340559   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:58.340570   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:58.340584   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:58.355372   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:58.355403   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:58.426292   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:58.426312   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:58.426326   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:58.509990   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:58.510034   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.550957   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:58.550988   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.104806   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:01.119379   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:01.119453   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:01.158376   64758 cri.go:89] found id: ""
	I0804 00:17:01.158407   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.158419   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:01.158426   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:01.158484   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:01.193826   64758 cri.go:89] found id: ""
	I0804 00:17:01.193858   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.193869   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:01.193876   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:01.193937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:01.228566   64758 cri.go:89] found id: ""
	I0804 00:17:01.228588   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.228600   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:01.228607   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:01.228667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:01.265736   64758 cri.go:89] found id: ""
	I0804 00:17:01.265762   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.265772   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:01.265778   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:01.265834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:01.302655   64758 cri.go:89] found id: ""
	I0804 00:17:01.302679   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.302694   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:01.302699   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:01.302753   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:01.340191   64758 cri.go:89] found id: ""
	I0804 00:17:01.340218   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.340226   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:01.340236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:01.340294   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:01.375767   64758 cri.go:89] found id: ""
	I0804 00:17:01.375789   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.375797   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:01.375802   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:01.375875   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:01.412446   64758 cri.go:89] found id: ""
	I0804 00:17:01.412479   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.412490   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:01.412502   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:01.412518   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.466271   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:01.466309   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:01.480800   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:01.480838   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:01.547909   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:01.547932   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:01.547948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:01.628318   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:01.628351   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.175883   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:04.189038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:04.189098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:04.229126   64758 cri.go:89] found id: ""
	I0804 00:17:04.229158   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.229167   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:04.229174   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:04.229235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:04.264107   64758 cri.go:89] found id: ""
	I0804 00:17:04.264134   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.264142   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:04.264147   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:04.264203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:04.299959   64758 cri.go:89] found id: ""
	I0804 00:17:04.299996   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.300004   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:04.300010   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:04.300056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:04.337978   64758 cri.go:89] found id: ""
	I0804 00:17:04.338006   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.338016   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:04.338023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:04.338081   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:04.377969   64758 cri.go:89] found id: ""
	I0804 00:17:04.377993   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.378001   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:04.378006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:04.378068   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:04.413036   64758 cri.go:89] found id: ""
	I0804 00:17:04.413062   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.413071   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:04.413078   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:04.413140   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:04.450387   64758 cri.go:89] found id: ""
	I0804 00:17:04.450417   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.450426   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:04.450431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:04.450488   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:04.490132   64758 cri.go:89] found id: ""
	I0804 00:17:04.490165   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.490177   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:04.490188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:04.490204   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:04.560633   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:04.560653   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:04.560668   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:04.639409   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:04.639445   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.682479   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:04.682512   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:04.734823   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:04.734857   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.250174   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:07.263523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:07.263599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:07.300095   64758 cri.go:89] found id: ""
	I0804 00:17:07.300124   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.300136   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:07.300144   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:07.300211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:07.337798   64758 cri.go:89] found id: ""
	I0804 00:17:07.337824   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.337846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:07.337851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:07.337902   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:07.375305   64758 cri.go:89] found id: ""
	I0804 00:17:07.375337   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.375348   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:07.375356   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:07.375406   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:07.411603   64758 cri.go:89] found id: ""
	I0804 00:17:07.411629   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.411639   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:07.411646   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:07.411704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:07.450478   64758 cri.go:89] found id: ""
	I0804 00:17:07.450502   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.450511   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:07.450518   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:07.450564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:07.489972   64758 cri.go:89] found id: ""
	I0804 00:17:07.489997   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.490006   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:07.490012   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:07.490073   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:07.523685   64758 cri.go:89] found id: ""
	I0804 00:17:07.523713   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.523725   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:07.523732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:07.523789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:07.562636   64758 cri.go:89] found id: ""
	I0804 00:17:07.562665   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.562675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:07.562686   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:07.562702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:07.647968   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:07.648004   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.689829   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:07.689856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:07.738333   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:07.738366   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.753419   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:07.753448   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:07.829678   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.329981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:10.343676   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:10.343743   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:10.379546   64758 cri.go:89] found id: ""
	I0804 00:17:10.379575   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.379586   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:10.379594   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:10.379657   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:10.416247   64758 cri.go:89] found id: ""
	I0804 00:17:10.416271   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.416279   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:10.416284   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:10.416340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:10.455261   64758 cri.go:89] found id: ""
	I0804 00:17:10.455291   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.455303   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:10.455310   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:10.455373   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:10.493220   64758 cri.go:89] found id: ""
	I0804 00:17:10.493251   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.493262   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:10.493270   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:10.493329   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:10.538682   64758 cri.go:89] found id: ""
	I0804 00:17:10.538709   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.538720   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:10.538727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:10.538787   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:10.575509   64758 cri.go:89] found id: ""
	I0804 00:17:10.575535   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.575546   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:10.575553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:10.575609   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:10.613163   64758 cri.go:89] found id: ""
	I0804 00:17:10.613188   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.613196   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:10.613201   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:10.613260   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:10.648914   64758 cri.go:89] found id: ""
	I0804 00:17:10.648940   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.648947   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:10.648956   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:10.648968   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:10.700151   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:10.700187   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:10.714971   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:10.714998   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:10.787679   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.787698   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:10.787710   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:10.865008   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:10.865048   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.406150   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:13.419602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:13.419659   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:13.456823   64758 cri.go:89] found id: ""
	I0804 00:17:13.456852   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.456863   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:13.456870   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:13.456935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:13.493527   64758 cri.go:89] found id: ""
	I0804 00:17:13.493556   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.493567   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:13.493574   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:13.493697   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:13.529745   64758 cri.go:89] found id: ""
	I0804 00:17:13.529770   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.529784   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:13.529790   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:13.529856   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:13.567775   64758 cri.go:89] found id: ""
	I0804 00:17:13.567811   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.567819   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:13.567824   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:13.567888   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:13.604638   64758 cri.go:89] found id: ""
	I0804 00:17:13.604670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.604678   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:13.604685   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:13.604741   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:13.646638   64758 cri.go:89] found id: ""
	I0804 00:17:13.646670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.646679   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:13.646684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:13.646730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:13.694656   64758 cri.go:89] found id: ""
	I0804 00:17:13.694682   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.694693   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:13.694701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:13.694761   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:13.733738   64758 cri.go:89] found id: ""
	I0804 00:17:13.733762   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.733771   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:13.733780   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:13.733792   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:13.749747   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:13.749775   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:13.832826   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:13.832852   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:13.832868   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:13.914198   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:13.914233   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.952753   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:13.952787   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.503600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:16.516932   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:16.517004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:16.552012   64758 cri.go:89] found id: ""
	I0804 00:17:16.552037   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.552046   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:16.552052   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:16.552110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:16.590626   64758 cri.go:89] found id: ""
	I0804 00:17:16.590653   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.590660   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:16.590666   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:16.590732   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:16.628684   64758 cri.go:89] found id: ""
	I0804 00:17:16.628712   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.628723   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:16.628729   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:16.628792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:16.664934   64758 cri.go:89] found id: ""
	I0804 00:17:16.664969   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.664980   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:16.664987   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:16.665054   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:16.700098   64758 cri.go:89] found id: ""
	I0804 00:17:16.700127   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.700138   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:16.700144   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:16.700214   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:16.736761   64758 cri.go:89] found id: ""
	I0804 00:17:16.736786   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.736795   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:16.736800   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:16.736863   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:16.780010   64758 cri.go:89] found id: ""
	I0804 00:17:16.780033   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.780045   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:16.780050   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:16.780106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:16.816079   64758 cri.go:89] found id: ""
	I0804 00:17:16.816103   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.816112   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:16.816122   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:16.816136   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.866526   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:16.866560   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:16.881254   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:16.881287   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:16.952491   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:16.952515   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:16.952530   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:17.038943   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:17.038977   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:19.580078   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:19.595538   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:19.595601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:19.632206   64758 cri.go:89] found id: ""
	I0804 00:17:19.632234   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.632245   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:19.632252   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:19.632307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:19.670335   64758 cri.go:89] found id: ""
	I0804 00:17:19.670362   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.670377   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:19.670388   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:19.670447   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:19.707772   64758 cri.go:89] found id: ""
	I0804 00:17:19.707801   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.707812   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:19.707818   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:19.707877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:19.743822   64758 cri.go:89] found id: ""
	I0804 00:17:19.743855   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.743867   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:19.743874   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:19.743930   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:19.781592   64758 cri.go:89] found id: ""
	I0804 00:17:19.781622   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.781632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:19.781640   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:19.781698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:19.818792   64758 cri.go:89] found id: ""
	I0804 00:17:19.818815   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.818823   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:19.818829   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:19.818877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:19.856486   64758 cri.go:89] found id: ""
	I0804 00:17:19.856511   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.856522   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:19.856528   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:19.856586   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:19.901721   64758 cri.go:89] found id: ""
	I0804 00:17:19.901743   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.901754   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:19.901764   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:19.901780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:19.980095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:19.980119   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:19.980134   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:20.072699   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:20.072750   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:20.159007   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:20.159038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:20.211785   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:20.211818   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:22.727235   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:22.740922   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:22.740996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:22.780356   64758 cri.go:89] found id: ""
	I0804 00:17:22.780381   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.780392   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:22.780400   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:22.780459   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:22.817075   64758 cri.go:89] found id: ""
	I0804 00:17:22.817100   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.817111   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:22.817119   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:22.817182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:22.857213   64758 cri.go:89] found id: ""
	I0804 00:17:22.857243   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.857253   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:22.857260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:22.857325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:22.894049   64758 cri.go:89] found id: ""
	I0804 00:17:22.894085   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.894096   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:22.894104   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:22.894171   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:22.929718   64758 cri.go:89] found id: ""
	I0804 00:17:22.929746   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.929756   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:22.929770   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:22.929843   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:22.964863   64758 cri.go:89] found id: ""
	I0804 00:17:22.964892   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.964901   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:22.964907   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:22.964958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:23.002565   64758 cri.go:89] found id: ""
	I0804 00:17:23.002593   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.002603   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:23.002611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:23.002676   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:23.038161   64758 cri.go:89] found id: ""
	I0804 00:17:23.038188   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.038199   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:23.038211   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:23.038224   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:23.091865   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:23.091903   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:23.108358   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:23.108388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:23.186417   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:23.186438   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:23.186453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.269119   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:23.269161   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:25.812405   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:25.833174   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:25.833253   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:25.881654   64758 cri.go:89] found id: ""
	I0804 00:17:25.881681   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.881690   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:25.881696   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:25.881757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:25.936968   64758 cri.go:89] found id: ""
	I0804 00:17:25.936997   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.937006   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:25.937011   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:25.937066   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:25.972437   64758 cri.go:89] found id: ""
	I0804 00:17:25.972462   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.972470   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:25.972475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:25.972529   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:26.008306   64758 cri.go:89] found id: ""
	I0804 00:17:26.008346   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.008357   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:26.008366   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:26.008435   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:26.045593   64758 cri.go:89] found id: ""
	I0804 00:17:26.045620   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.045632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:26.045639   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:26.045696   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:26.084170   64758 cri.go:89] found id: ""
	I0804 00:17:26.084195   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.084205   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:26.084212   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:26.084272   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:26.122524   64758 cri.go:89] found id: ""
	I0804 00:17:26.122551   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.122559   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:26.122565   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:26.122623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:26.159264   64758 cri.go:89] found id: ""
	I0804 00:17:26.159297   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.159308   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:26.159320   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:26.159337   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:26.205692   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:26.205718   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:26.257286   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:26.257321   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:26.271582   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:26.271611   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:26.344562   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:26.344586   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:26.344598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:28.929410   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:28.943941   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:28.944003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:28.986127   64758 cri.go:89] found id: ""
	I0804 00:17:28.986157   64758 logs.go:276] 0 containers: []
	W0804 00:17:28.986169   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:28.986176   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:28.986237   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:29.025528   64758 cri.go:89] found id: ""
	I0804 00:17:29.025556   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.025564   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:29.025570   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:29.025624   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:29.059525   64758 cri.go:89] found id: ""
	I0804 00:17:29.059553   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.059561   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:29.059566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:29.059614   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:29.097451   64758 cri.go:89] found id: ""
	I0804 00:17:29.097489   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.097499   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:29.097506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:29.097564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:29.135504   64758 cri.go:89] found id: ""
	I0804 00:17:29.135532   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.135540   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:29.135546   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:29.135601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:29.175277   64758 cri.go:89] found id: ""
	I0804 00:17:29.175314   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.175324   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:29.175332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:29.175391   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:29.210275   64758 cri.go:89] found id: ""
	I0804 00:17:29.210303   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.210314   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:29.210321   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:29.210382   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:29.246138   64758 cri.go:89] found id: ""
	I0804 00:17:29.246174   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.246186   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:29.246196   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:29.246213   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:29.298935   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:29.298971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:29.313342   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:29.313388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:29.384609   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:29.384635   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:29.384650   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:29.461759   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:29.461795   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.010152   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:32.023609   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:32.023677   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:32.062480   64758 cri.go:89] found id: ""
	I0804 00:17:32.062508   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.062517   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:32.062523   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:32.062590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:32.099601   64758 cri.go:89] found id: ""
	I0804 00:17:32.099627   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.099634   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:32.099640   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:32.099691   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:32.138651   64758 cri.go:89] found id: ""
	I0804 00:17:32.138680   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.138689   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:32.138694   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:32.138751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:32.182224   64758 cri.go:89] found id: ""
	I0804 00:17:32.182249   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.182257   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:32.182264   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:32.182318   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:32.224381   64758 cri.go:89] found id: ""
	I0804 00:17:32.224410   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.224421   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:32.224429   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:32.224486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:32.261569   64758 cri.go:89] found id: ""
	I0804 00:17:32.261600   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.261609   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:32.261615   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:32.261663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:32.304769   64758 cri.go:89] found id: ""
	I0804 00:17:32.304793   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.304807   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:32.304814   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:32.304867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:32.348695   64758 cri.go:89] found id: ""
	I0804 00:17:32.348727   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.348736   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:32.348745   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:32.348757   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.389444   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:32.389473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:32.442901   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:32.442938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:32.457562   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:32.457588   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:32.529121   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:32.529144   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:32.529160   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.114712   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:35.129725   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:35.129795   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:35.167226   64758 cri.go:89] found id: ""
	I0804 00:17:35.167248   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.167257   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:35.167262   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:35.167310   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:35.200889   64758 cri.go:89] found id: ""
	I0804 00:17:35.200914   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.200922   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:35.200927   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:35.201000   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:35.234899   64758 cri.go:89] found id: ""
	I0804 00:17:35.234927   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.234938   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:35.234945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:35.235003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:35.271355   64758 cri.go:89] found id: ""
	I0804 00:17:35.271393   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.271405   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:35.271412   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:35.271471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:35.313557   64758 cri.go:89] found id: ""
	I0804 00:17:35.313585   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.313595   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:35.313602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:35.313663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:35.352931   64758 cri.go:89] found id: ""
	I0804 00:17:35.352960   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.352971   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:35.352979   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:35.353046   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:35.391202   64758 cri.go:89] found id: ""
	I0804 00:17:35.391232   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.391256   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:35.391263   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:35.391337   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:35.427599   64758 cri.go:89] found id: ""
	I0804 00:17:35.427627   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.427638   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:35.427649   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:35.427666   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:35.482025   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:35.482061   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:35.498274   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:35.498303   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:35.572606   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:35.572631   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:35.572644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.655534   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:35.655566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.205756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:38.218974   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:38.219044   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:38.253798   64758 cri.go:89] found id: ""
	I0804 00:17:38.253827   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.253839   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:38.253852   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:38.253911   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:38.291074   64758 cri.go:89] found id: ""
	I0804 00:17:38.291102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.291113   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:38.291120   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:38.291182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:38.332097   64758 cri.go:89] found id: ""
	I0804 00:17:38.332123   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.332133   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:38.332140   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:38.332198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:38.370074   64758 cri.go:89] found id: ""
	I0804 00:17:38.370102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.370110   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:38.370117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:38.370176   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:38.406962   64758 cri.go:89] found id: ""
	I0804 00:17:38.406984   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.406993   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:38.406998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:38.407051   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:38.447532   64758 cri.go:89] found id: ""
	I0804 00:17:38.447562   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.447572   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:38.447579   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:38.447653   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:38.484326   64758 cri.go:89] found id: ""
	I0804 00:17:38.484356   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.484368   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:38.484375   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:38.484444   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:38.521831   64758 cri.go:89] found id: ""
	I0804 00:17:38.521858   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.521869   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:38.521880   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:38.521893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.570540   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:38.570569   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:38.624921   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:38.624953   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:38.639451   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:38.639477   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:38.714435   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:38.714459   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:38.714475   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:41.295160   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:41.310032   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:41.310108   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:41.350363   64758 cri.go:89] found id: ""
	I0804 00:17:41.350393   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.350404   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:41.350412   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:41.350475   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:41.391662   64758 cri.go:89] found id: ""
	I0804 00:17:41.391691   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.391698   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:41.391703   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:41.391760   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:41.429653   64758 cri.go:89] found id: ""
	I0804 00:17:41.429678   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.429686   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:41.429692   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:41.429739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:41.469456   64758 cri.go:89] found id: ""
	I0804 00:17:41.469483   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.469494   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:41.469505   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:41.469566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:41.506124   64758 cri.go:89] found id: ""
	I0804 00:17:41.506154   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.506164   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:41.506171   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:41.506234   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:41.543139   64758 cri.go:89] found id: ""
	I0804 00:17:41.543171   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.543182   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:41.543190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:41.543252   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:41.580537   64758 cri.go:89] found id: ""
	I0804 00:17:41.580568   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.580578   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:41.580585   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:41.580652   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:41.619828   64758 cri.go:89] found id: ""
	I0804 00:17:41.619854   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.619862   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:41.619869   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:41.619882   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:41.660749   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:41.660780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:41.712889   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:41.712924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:41.726422   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:41.726447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:41.805673   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:41.805697   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:41.805712   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.386563   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:44.399891   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:44.399954   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:44.434270   64758 cri.go:89] found id: ""
	I0804 00:17:44.434297   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.434305   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:44.434311   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:44.434372   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:44.469423   64758 cri.go:89] found id: ""
	I0804 00:17:44.469454   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.469463   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:44.469468   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:44.469535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:44.505511   64758 cri.go:89] found id: ""
	I0804 00:17:44.505539   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.505547   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:44.505553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:44.505602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:44.540897   64758 cri.go:89] found id: ""
	I0804 00:17:44.540922   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.540932   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:44.540937   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:44.540996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:44.578722   64758 cri.go:89] found id: ""
	I0804 00:17:44.578747   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.578755   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:44.578760   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:44.578812   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:44.615838   64758 cri.go:89] found id: ""
	I0804 00:17:44.615863   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.615874   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:44.615881   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:44.615940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:44.657695   64758 cri.go:89] found id: ""
	I0804 00:17:44.657724   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.657734   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:44.657741   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:44.657916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:44.695852   64758 cri.go:89] found id: ""
	I0804 00:17:44.695882   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.695892   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:44.695901   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:44.695912   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:44.754643   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:44.754687   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:44.773964   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:44.773994   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:44.857544   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:44.857567   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:44.857583   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.952987   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:44.953027   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:47.504957   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:47.520153   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:47.520232   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:47.557303   64758 cri.go:89] found id: ""
	I0804 00:17:47.557326   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.557334   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:47.557339   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:47.557410   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:47.595626   64758 cri.go:89] found id: ""
	I0804 00:17:47.595655   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.595665   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:47.595675   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:47.595733   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:47.633430   64758 cri.go:89] found id: ""
	I0804 00:17:47.633458   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.633466   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:47.633472   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:47.633525   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:47.670116   64758 cri.go:89] found id: ""
	I0804 00:17:47.670140   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.670149   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:47.670154   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:47.670200   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:47.709019   64758 cri.go:89] found id: ""
	I0804 00:17:47.709042   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.709050   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:47.709055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:47.709111   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:47.745230   64758 cri.go:89] found id: ""
	I0804 00:17:47.745251   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.745259   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:47.745265   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:47.745319   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:47.787957   64758 cri.go:89] found id: ""
	I0804 00:17:47.787985   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.787996   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:47.788004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:47.788063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:47.821451   64758 cri.go:89] found id: ""
	I0804 00:17:47.821477   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.821488   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:47.821498   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:47.821516   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:47.903035   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:47.903139   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:47.903162   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:47.986659   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:47.986702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:48.037921   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:48.037951   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:48.095354   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:48.095389   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:50.613264   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:50.627717   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:50.627792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:50.669311   64758 cri.go:89] found id: ""
	I0804 00:17:50.669338   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.669347   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:50.669370   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:50.669438   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:50.714674   64758 cri.go:89] found id: ""
	I0804 00:17:50.714704   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.714713   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:50.714718   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:50.714769   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:50.755291   64758 cri.go:89] found id: ""
	I0804 00:17:50.755318   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.755326   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:50.755332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:50.755394   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:50.801927   64758 cri.go:89] found id: ""
	I0804 00:17:50.801955   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.801964   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:50.801970   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:50.802020   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:50.845096   64758 cri.go:89] found id: ""
	I0804 00:17:50.845121   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.845130   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:50.845136   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:50.845193   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:50.882664   64758 cri.go:89] found id: ""
	I0804 00:17:50.882694   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.882705   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:50.882712   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:50.882771   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:50.921233   64758 cri.go:89] found id: ""
	I0804 00:17:50.921260   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.921268   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:50.921273   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:50.921326   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:50.955254   64758 cri.go:89] found id: ""
	I0804 00:17:50.955286   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.955298   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:50.955311   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:50.955329   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:51.010001   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:51.010037   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:51.024943   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:51.024966   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:51.096095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:51.096123   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:51.096139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:51.177829   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:51.177864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:53.720665   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:53.736318   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:53.736380   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:53.772887   64758 cri.go:89] found id: ""
	I0804 00:17:53.772916   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.772926   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:53.772934   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:53.772995   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:53.811771   64758 cri.go:89] found id: ""
	I0804 00:17:53.811797   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.811837   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:53.811845   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:53.811906   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:53.846684   64758 cri.go:89] found id: ""
	I0804 00:17:53.846716   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.846726   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:53.846736   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:53.846798   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:53.883550   64758 cri.go:89] found id: ""
	I0804 00:17:53.883581   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.883592   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:53.883600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:53.883662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:53.921031   64758 cri.go:89] found id: ""
	I0804 00:17:53.921061   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.921072   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:53.921080   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:53.921153   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:53.960338   64758 cri.go:89] found id: ""
	I0804 00:17:53.960364   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.960374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:53.960381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:53.960441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:53.998404   64758 cri.go:89] found id: ""
	I0804 00:17:53.998434   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.998450   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:53.998458   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:53.998520   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:54.033417   64758 cri.go:89] found id: ""
	I0804 00:17:54.033444   64758 logs.go:276] 0 containers: []
	W0804 00:17:54.033453   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:54.033461   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:54.033473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:54.071945   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:54.071971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:54.124614   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:54.124644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:54.140757   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:54.140783   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:54.241735   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:54.241754   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:54.241769   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:56.821591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:56.836569   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:56.836631   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:56.872013   64758 cri.go:89] found id: ""
	I0804 00:17:56.872039   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.872048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:56.872054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:56.872110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:56.908022   64758 cri.go:89] found id: ""
	I0804 00:17:56.908051   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.908061   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:56.908067   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:56.908114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:56.943309   64758 cri.go:89] found id: ""
	I0804 00:17:56.943336   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.943347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:56.943359   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:56.943415   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:56.977799   64758 cri.go:89] found id: ""
	I0804 00:17:56.977839   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.977847   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:56.977853   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:56.977916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:57.015185   64758 cri.go:89] found id: ""
	I0804 00:17:57.015213   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.015223   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:57.015237   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:57.015295   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:57.051856   64758 cri.go:89] found id: ""
	I0804 00:17:57.051879   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.051887   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:57.051893   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:57.051944   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:57.086349   64758 cri.go:89] found id: ""
	I0804 00:17:57.086376   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.086387   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:57.086393   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:57.086439   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:57.125005   64758 cri.go:89] found id: ""
	I0804 00:17:57.125048   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.125064   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:57.125076   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:57.125090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:57.200348   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:57.200382   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:57.240899   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:57.240924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.294331   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:57.294375   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:57.308388   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:57.308429   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:57.382602   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:59.883070   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:59.897055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:59.897116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:59.932983   64758 cri.go:89] found id: ""
	I0804 00:17:59.933012   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.933021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:59.933029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:59.933088   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:59.971781   64758 cri.go:89] found id: ""
	I0804 00:17:59.971807   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.971815   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:59.971820   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:59.971878   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:00.008381   64758 cri.go:89] found id: ""
	I0804 00:18:00.008406   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.008414   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:00.008419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:00.008483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:00.053257   64758 cri.go:89] found id: ""
	I0804 00:18:00.053281   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.053290   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:00.053295   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:00.053342   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:00.089891   64758 cri.go:89] found id: ""
	I0804 00:18:00.089925   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.089936   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:00.089943   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:00.090008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:00.129833   64758 cri.go:89] found id: ""
	I0804 00:18:00.129863   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.129875   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:00.129884   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:00.129942   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:00.181324   64758 cri.go:89] found id: ""
	I0804 00:18:00.181390   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.181403   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:00.181410   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:00.181471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:00.224426   64758 cri.go:89] found id: ""
	I0804 00:18:00.224451   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.224459   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:00.224467   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:00.224481   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:00.240122   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:00.240155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:00.317324   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:00.317346   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:00.317379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:00.398917   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:00.398952   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:00.440730   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:00.440758   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:02.992128   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:03.006787   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:03.006870   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:03.041291   64758 cri.go:89] found id: ""
	I0804 00:18:03.041321   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.041332   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:03.041341   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:03.041427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:03.077822   64758 cri.go:89] found id: ""
	I0804 00:18:03.077851   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.077863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:03.077871   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:03.077928   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:03.116579   64758 cri.go:89] found id: ""
	I0804 00:18:03.116603   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.116611   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:03.116616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:03.116662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:03.154904   64758 cri.go:89] found id: ""
	I0804 00:18:03.154931   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.154942   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:03.154950   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:03.155018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:03.190300   64758 cri.go:89] found id: ""
	I0804 00:18:03.190328   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.190341   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:03.190349   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:03.190413   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:03.225975   64758 cri.go:89] found id: ""
	I0804 00:18:03.226004   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.226016   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:03.226023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:03.226087   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:03.271499   64758 cri.go:89] found id: ""
	I0804 00:18:03.271525   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.271535   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:03.271543   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:03.271602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:03.308643   64758 cri.go:89] found id: ""
	I0804 00:18:03.308668   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.308675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:03.308684   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:03.308698   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:03.324528   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:03.324562   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:03.401102   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:03.401125   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:03.401139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:03.481817   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:03.481864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:03.522568   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:03.522601   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.074678   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:06.089765   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:06.089844   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:06.128372   64758 cri.go:89] found id: ""
	I0804 00:18:06.128400   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.128411   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:06.128419   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:06.128467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:06.169488   64758 cri.go:89] found id: ""
	I0804 00:18:06.169515   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.169525   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:06.169532   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:06.169590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:06.207969   64758 cri.go:89] found id: ""
	I0804 00:18:06.207998   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.208009   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:06.208015   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:06.208067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:06.244497   64758 cri.go:89] found id: ""
	I0804 00:18:06.244521   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.244529   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:06.244535   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:06.244592   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:06.282905   64758 cri.go:89] found id: ""
	I0804 00:18:06.282935   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.282945   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:06.282952   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:06.283013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:06.322498   64758 cri.go:89] found id: ""
	I0804 00:18:06.322523   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.322530   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:06.322537   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:06.322583   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:06.361377   64758 cri.go:89] found id: ""
	I0804 00:18:06.361402   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.361412   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:06.361420   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:06.361485   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:06.402082   64758 cri.go:89] found id: ""
	I0804 00:18:06.402112   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.402120   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:06.402128   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:06.402141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.452052   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:06.452089   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:06.466695   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:06.466734   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:06.546115   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:06.546140   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:06.546155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:06.639671   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:06.639708   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:09.193473   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:09.207696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:09.207755   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:09.247757   64758 cri.go:89] found id: ""
	I0804 00:18:09.247784   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.247795   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:09.247802   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:09.247867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:09.285516   64758 cri.go:89] found id: ""
	I0804 00:18:09.285549   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.285559   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:09.285567   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:09.285628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:09.321689   64758 cri.go:89] found id: ""
	I0804 00:18:09.321715   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.321725   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:09.321732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:09.321789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:09.358135   64758 cri.go:89] found id: ""
	I0804 00:18:09.358158   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.358166   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:09.358176   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:09.358223   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:09.393642   64758 cri.go:89] found id: ""
	I0804 00:18:09.393667   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.393675   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:09.393681   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:09.393730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:09.430651   64758 cri.go:89] found id: ""
	I0804 00:18:09.430674   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.430683   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:09.430689   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:09.430734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:09.472433   64758 cri.go:89] found id: ""
	I0804 00:18:09.472460   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.472469   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:09.472474   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:09.472533   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:09.511147   64758 cri.go:89] found id: ""
	I0804 00:18:09.511171   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.511179   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:09.511187   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:09.511207   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:09.560099   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:09.560142   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:09.574609   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:09.574641   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:09.646863   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:09.646891   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:09.646906   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:09.727309   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:09.727352   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:12.268925   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:12.284737   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:12.284813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:12.326015   64758 cri.go:89] found id: ""
	I0804 00:18:12.326036   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.326044   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:12.326049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:12.326095   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:12.374096   64758 cri.go:89] found id: ""
	I0804 00:18:12.374129   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.374138   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:12.374143   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:12.374235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:12.426467   64758 cri.go:89] found id: ""
	I0804 00:18:12.426493   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.426502   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:12.426509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:12.426570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:12.485034   64758 cri.go:89] found id: ""
	I0804 00:18:12.485060   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.485072   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:12.485079   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:12.485138   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:12.523490   64758 cri.go:89] found id: ""
	I0804 00:18:12.523517   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.523525   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:12.523530   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:12.523577   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:12.563318   64758 cri.go:89] found id: ""
	I0804 00:18:12.563347   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.563358   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:12.563365   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:12.563425   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:12.600455   64758 cri.go:89] found id: ""
	I0804 00:18:12.600482   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.600492   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:12.600499   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:12.600566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:12.641146   64758 cri.go:89] found id: ""
	I0804 00:18:12.641170   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.641178   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:12.641186   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:12.641197   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:12.697240   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:12.697274   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:12.711399   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:12.711432   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:12.794022   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:12.794050   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:12.794067   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:12.881327   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:12.881379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:15.425765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:15.439338   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:15.439420   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:15.477964   64758 cri.go:89] found id: ""
	I0804 00:18:15.477991   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.478002   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:15.478009   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:15.478069   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:15.514554   64758 cri.go:89] found id: ""
	I0804 00:18:15.514574   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.514583   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:15.514588   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:15.514636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:15.549702   64758 cri.go:89] found id: ""
	I0804 00:18:15.549732   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.549741   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:15.549747   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:15.549813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:15.584619   64758 cri.go:89] found id: ""
	I0804 00:18:15.584663   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.584675   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:15.584683   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:15.584746   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:15.625084   64758 cri.go:89] found id: ""
	I0804 00:18:15.625111   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.625121   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:15.625128   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:15.625192   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:15.666629   64758 cri.go:89] found id: ""
	I0804 00:18:15.666655   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.666664   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:15.666673   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:15.666726   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:15.704287   64758 cri.go:89] found id: ""
	I0804 00:18:15.704316   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.704324   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:15.704330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:15.704383   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:15.740629   64758 cri.go:89] found id: ""
	I0804 00:18:15.740659   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.740668   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:15.740678   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:15.740702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:15.794093   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:15.794124   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:15.807629   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:15.807659   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:15.887638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:15.887665   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:15.887680   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:15.972935   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:15.972978   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:18.518022   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:18.532360   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:18.532433   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:18.565519   64758 cri.go:89] found id: ""
	I0804 00:18:18.565544   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.565552   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:18.565557   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:18.565612   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:18.599939   64758 cri.go:89] found id: ""
	I0804 00:18:18.599967   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.599978   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:18.599985   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:18.600055   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:18.639035   64758 cri.go:89] found id: ""
	I0804 00:18:18.639062   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.639070   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:18.639076   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:18.639124   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:18.677188   64758 cri.go:89] found id: ""
	I0804 00:18:18.677210   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.677218   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:18.677223   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:18.677268   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:18.715892   64758 cri.go:89] found id: ""
	I0804 00:18:18.715921   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.715932   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:18.715940   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:18.716005   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:18.752274   64758 cri.go:89] found id: ""
	I0804 00:18:18.752298   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.752307   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:18.752313   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:18.752368   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:18.795251   64758 cri.go:89] found id: ""
	I0804 00:18:18.795279   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.795288   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:18.795293   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:18.795353   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.830842   64758 cri.go:89] found id: ""
	I0804 00:18:18.830866   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.830874   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:18.830882   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:18.830893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:18.883687   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:18.883719   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:18.898406   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:18.898433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:18.973191   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:18.973215   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:18.973231   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:19.054094   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:19.054141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:21.597245   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:21.612534   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:21.612605   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:21.649391   64758 cri.go:89] found id: ""
	I0804 00:18:21.649415   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.649426   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:21.649434   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:21.649492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:21.683202   64758 cri.go:89] found id: ""
	I0804 00:18:21.683226   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.683233   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:21.683244   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:21.683300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:21.717450   64758 cri.go:89] found id: ""
	I0804 00:18:21.717475   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.717484   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:21.717489   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:21.717547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:21.752559   64758 cri.go:89] found id: ""
	I0804 00:18:21.752588   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.752596   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:21.752602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:21.752650   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:21.788336   64758 cri.go:89] found id: ""
	I0804 00:18:21.788364   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.788375   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:21.788381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:21.788428   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:21.829404   64758 cri.go:89] found id: ""
	I0804 00:18:21.829428   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.829436   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:21.829443   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:21.829502   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:21.869473   64758 cri.go:89] found id: ""
	I0804 00:18:21.869504   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.869515   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:21.869521   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:21.869587   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:21.909883   64758 cri.go:89] found id: ""
	I0804 00:18:21.909907   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.909915   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:21.909923   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:21.909940   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:21.925038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:21.925071   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:22.000261   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.000281   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:22.000294   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:22.082813   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:22.082846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:22.126741   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:22.126774   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:24.677246   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:24.692404   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:24.692467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:24.739001   64758 cri.go:89] found id: ""
	I0804 00:18:24.739039   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.739049   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:24.739054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:24.739119   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:24.779558   64758 cri.go:89] found id: ""
	I0804 00:18:24.779586   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.779597   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:24.779605   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:24.779664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:24.819257   64758 cri.go:89] found id: ""
	I0804 00:18:24.819284   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.819295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:24.819301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:24.819363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:24.862504   64758 cri.go:89] found id: ""
	I0804 00:18:24.862531   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.862539   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:24.862544   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:24.862599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:24.899605   64758 cri.go:89] found id: ""
	I0804 00:18:24.899637   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.899649   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:24.899656   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:24.899716   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:24.936575   64758 cri.go:89] found id: ""
	I0804 00:18:24.936604   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.936612   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:24.936618   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:24.936667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:24.971736   64758 cri.go:89] found id: ""
	I0804 00:18:24.971764   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.971775   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:24.971782   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:24.971851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:25.010214   64758 cri.go:89] found id: ""
	I0804 00:18:25.010244   64758 logs.go:276] 0 containers: []
	W0804 00:18:25.010253   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:25.010265   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:25.010279   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:25.091145   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:25.091186   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:25.137574   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:25.137603   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:25.189559   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:25.189593   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:25.204725   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:25.204763   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:25.278903   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:27.779500   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:27.793548   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:27.793628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:27.830811   64758 cri.go:89] found id: ""
	I0804 00:18:27.830844   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.830854   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:27.830862   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:27.830919   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:27.869966   64758 cri.go:89] found id: ""
	I0804 00:18:27.869991   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.869998   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:27.870004   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:27.870062   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:27.909474   64758 cri.go:89] found id: ""
	I0804 00:18:27.909496   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.909504   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:27.909509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:27.909567   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:27.948588   64758 cri.go:89] found id: ""
	I0804 00:18:27.948613   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.948625   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:27.948632   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:27.948704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:27.991957   64758 cri.go:89] found id: ""
	I0804 00:18:27.991979   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.991987   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:27.991993   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:27.992052   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:28.029516   64758 cri.go:89] found id: ""
	I0804 00:18:28.029544   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.029555   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:28.029562   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:28.029627   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:28.067851   64758 cri.go:89] found id: ""
	I0804 00:18:28.067879   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.067891   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:28.067898   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:28.067957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:28.107488   64758 cri.go:89] found id: ""
	I0804 00:18:28.107514   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.107524   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:28.107534   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:28.107548   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:28.158490   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:28.158523   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:28.172000   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:28.172030   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:28.247803   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:28.247823   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:28.247839   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:28.326695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:28.326727   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:30.867241   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:30.881074   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:30.881146   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:30.919078   64758 cri.go:89] found id: ""
	I0804 00:18:30.919105   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.919115   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:30.919122   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:30.919184   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:30.954436   64758 cri.go:89] found id: ""
	I0804 00:18:30.954463   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.954474   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:30.954481   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:30.954546   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:30.993080   64758 cri.go:89] found id: ""
	I0804 00:18:30.993110   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.993121   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:30.993129   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:30.993188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:31.031465   64758 cri.go:89] found id: ""
	I0804 00:18:31.031493   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.031504   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:31.031512   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:31.031570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:31.067374   64758 cri.go:89] found id: ""
	I0804 00:18:31.067405   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.067416   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:31.067423   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:31.067493   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:31.104021   64758 cri.go:89] found id: ""
	I0804 00:18:31.104048   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.104059   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:31.104066   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:31.104128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:31.146995   64758 cri.go:89] found id: ""
	I0804 00:18:31.147023   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.147033   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:31.147040   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:31.147106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:31.184708   64758 cri.go:89] found id: ""
	I0804 00:18:31.184739   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.184749   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:31.184760   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:31.184776   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:31.237743   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:31.237781   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:31.252038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:31.252070   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:31.326357   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:31.326380   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:31.326401   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:31.408212   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:31.408256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:33.954396   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:33.968311   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:33.968384   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:34.006574   64758 cri.go:89] found id: ""
	I0804 00:18:34.006605   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.006625   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:34.006635   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:34.006698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:34.042400   64758 cri.go:89] found id: ""
	I0804 00:18:34.042427   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.042435   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:34.042441   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:34.042492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:34.080769   64758 cri.go:89] found id: ""
	I0804 00:18:34.080793   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.080804   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:34.080810   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:34.080877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:34.118283   64758 cri.go:89] found id: ""
	I0804 00:18:34.118311   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.118320   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:34.118326   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:34.118377   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:34.153679   64758 cri.go:89] found id: ""
	I0804 00:18:34.153708   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.153719   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:34.153727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:34.153780   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:34.189618   64758 cri.go:89] found id: ""
	I0804 00:18:34.189674   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.189686   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:34.189696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:34.189770   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:34.224628   64758 cri.go:89] found id: ""
	I0804 00:18:34.224666   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.224677   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:34.224684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:34.224744   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:34.265343   64758 cri.go:89] found id: ""
	I0804 00:18:34.265389   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.265399   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:34.265409   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:34.265428   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:34.337992   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:34.338014   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:34.338025   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:34.420224   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:34.420263   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:34.462009   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:34.462042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:34.520087   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:34.520120   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.035398   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:37.048955   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:37.049024   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:37.087433   64758 cri.go:89] found id: ""
	I0804 00:18:37.087460   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.087470   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:37.087478   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:37.087542   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:37.128227   64758 cri.go:89] found id: ""
	I0804 00:18:37.128255   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.128267   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:37.128275   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:37.128328   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:37.165371   64758 cri.go:89] found id: ""
	I0804 00:18:37.165405   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.165415   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:37.165424   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:37.165486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:37.201168   64758 cri.go:89] found id: ""
	I0804 00:18:37.201198   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.201209   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:37.201217   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:37.201278   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:37.237378   64758 cri.go:89] found id: ""
	I0804 00:18:37.237406   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.237414   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:37.237419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:37.237465   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:37.273425   64758 cri.go:89] found id: ""
	I0804 00:18:37.273456   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.273467   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:37.273475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:37.273547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:37.313019   64758 cri.go:89] found id: ""
	I0804 00:18:37.313048   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.313056   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:37.313061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:37.313116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:37.354741   64758 cri.go:89] found id: ""
	I0804 00:18:37.354771   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.354779   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:37.354788   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:37.354800   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:37.408703   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:37.408740   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.423393   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:37.423419   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:37.497460   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:37.497487   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:37.497501   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:37.579811   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:37.579856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:40.122872   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:40.139106   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:40.139177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:40.178571   64758 cri.go:89] found id: ""
	I0804 00:18:40.178599   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.178610   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:40.178617   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:40.178679   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:40.215680   64758 cri.go:89] found id: ""
	I0804 00:18:40.215714   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.215722   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:40.215728   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:40.215776   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:40.250618   64758 cri.go:89] found id: ""
	I0804 00:18:40.250647   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.250658   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:40.250666   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:40.250729   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:40.289195   64758 cri.go:89] found id: ""
	I0804 00:18:40.289223   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.289233   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:40.289240   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:40.289296   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:40.330961   64758 cri.go:89] found id: ""
	I0804 00:18:40.330988   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.330998   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:40.331006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:40.331056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:40.376435   64758 cri.go:89] found id: ""
	I0804 00:18:40.376465   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.376478   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:40.376487   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:40.376558   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:40.416415   64758 cri.go:89] found id: ""
	I0804 00:18:40.416447   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.416459   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:40.416465   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:40.416535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:40.452958   64758 cri.go:89] found id: ""
	I0804 00:18:40.452996   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.453007   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:40.453018   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:40.453036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:40.503775   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:40.503808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:40.517825   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:40.517855   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:40.587818   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:40.587847   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:40.587861   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:40.674139   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:40.674183   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:43.217266   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:43.232190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:43.232262   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:43.270127   64758 cri.go:89] found id: ""
	I0804 00:18:43.270156   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.270163   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:43.270169   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:43.270219   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:43.309401   64758 cri.go:89] found id: ""
	I0804 00:18:43.309429   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.309439   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:43.309446   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:43.309503   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:43.347210   64758 cri.go:89] found id: ""
	I0804 00:18:43.347235   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.347242   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:43.347247   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:43.347300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:43.382548   64758 cri.go:89] found id: ""
	I0804 00:18:43.382578   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.382588   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:43.382595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:43.382658   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:43.422076   64758 cri.go:89] found id: ""
	I0804 00:18:43.422102   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.422113   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:43.422121   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:43.422168   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:43.458525   64758 cri.go:89] found id: ""
	I0804 00:18:43.458552   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.458560   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:43.458566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:43.458623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:43.498134   64758 cri.go:89] found id: ""
	I0804 00:18:43.498157   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.498165   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:43.498170   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:43.498217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:43.543289   64758 cri.go:89] found id: ""
	I0804 00:18:43.543312   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.543320   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:43.543328   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:43.543338   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:43.593489   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:43.593521   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:43.607838   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:43.607869   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:43.682791   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:43.682813   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:43.682826   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:43.761695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:43.761737   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
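	The same pgrep/crictl probes repeat every few seconds above while minikube waits for an apiserver to appear. A rough bash equivalent of that wait loop, assuming the CRI-O defaults used in this run (a sketch, not minikube's actual code):

	    # Keep polling until either an apiserver process or an apiserver container shows up.
	    while true; do
	      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "kube-apiserver process is running"; break
	      fi
	      ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	      if [ -n "$ids" ]; then echo "found container(s): $ids"; break; fi
	      sleep 3   # interval assumed from the timestamps above
	    done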
	I0804 00:18:46.305385   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:46.320003   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:46.320063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:46.367941   64758 cri.go:89] found id: ""
	I0804 00:18:46.367969   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.367980   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:46.367986   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:46.368058   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:46.422540   64758 cri.go:89] found id: ""
	I0804 00:18:46.422563   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.422572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:46.422578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:46.422637   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:46.470192   64758 cri.go:89] found id: ""
	I0804 00:18:46.470238   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.470248   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:46.470257   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:46.470316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:46.512375   64758 cri.go:89] found id: ""
	I0804 00:18:46.512399   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.512408   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:46.512413   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:46.512471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:46.546547   64758 cri.go:89] found id: ""
	I0804 00:18:46.546580   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.546592   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:46.546600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:46.546665   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:46.583598   64758 cri.go:89] found id: ""
	I0804 00:18:46.583621   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.583630   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:46.583636   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:46.583692   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:46.621066   64758 cri.go:89] found id: ""
	I0804 00:18:46.621101   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.621116   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:46.621122   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:46.621177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:46.654115   64758 cri.go:89] found id: ""
	I0804 00:18:46.654149   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.654162   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:46.654174   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:46.654191   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:46.738542   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:46.738582   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.778894   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:46.778923   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:46.833225   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:46.833257   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:46.847222   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:46.847247   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:46.922590   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.423639   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:49.437417   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:49.437490   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:49.474889   64758 cri.go:89] found id: ""
	I0804 00:18:49.474914   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.474923   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:49.474929   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:49.474986   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:49.512860   64758 cri.go:89] found id: ""
	I0804 00:18:49.512889   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.512900   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:49.512908   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:49.512965   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:49.550558   64758 cri.go:89] found id: ""
	I0804 00:18:49.550594   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.550603   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:49.550611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:49.550671   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:49.587779   64758 cri.go:89] found id: ""
	I0804 00:18:49.587810   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.587823   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:49.587831   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:49.587890   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:49.630307   64758 cri.go:89] found id: ""
	I0804 00:18:49.630333   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.630344   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:49.630352   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:49.630411   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:49.665013   64758 cri.go:89] found id: ""
	I0804 00:18:49.665046   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.665057   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:49.665064   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:49.665127   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:49.701375   64758 cri.go:89] found id: ""
	I0804 00:18:49.701401   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.701410   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:49.701415   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:49.701472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:49.737237   64758 cri.go:89] found id: ""
	I0804 00:18:49.737261   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.737269   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:49.737278   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:49.737291   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:49.790998   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:49.791033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:49.804933   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:49.804965   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:49.877997   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.878019   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:49.878035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:49.963836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:49.963872   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:52.506621   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:52.521482   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:52.521553   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:52.555980   64758 cri.go:89] found id: ""
	I0804 00:18:52.556010   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.556021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:52.556029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:52.556094   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:52.593088   64758 cri.go:89] found id: ""
	I0804 00:18:52.593119   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.593130   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:52.593138   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:52.593197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:52.632058   64758 cri.go:89] found id: ""
	I0804 00:18:52.632088   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.632107   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:52.632115   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:52.632177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:52.668701   64758 cri.go:89] found id: ""
	I0804 00:18:52.668730   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.668739   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:52.668745   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:52.668814   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:52.705041   64758 cri.go:89] found id: ""
	I0804 00:18:52.705068   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.705075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:52.705089   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:52.705149   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:52.743304   64758 cri.go:89] found id: ""
	I0804 00:18:52.743327   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.743335   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:52.743340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:52.743397   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:52.781020   64758 cri.go:89] found id: ""
	I0804 00:18:52.781050   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.781060   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:52.781073   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:52.781134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:52.820979   64758 cri.go:89] found id: ""
	I0804 00:18:52.821004   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.821014   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:52.821024   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:52.821042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:52.876450   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:52.876488   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:52.890529   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:52.890566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:52.960682   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:52.960710   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:52.960725   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:53.044000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:53.044040   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:55.601594   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:55.615574   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:55.615645   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:55.655116   64758 cri.go:89] found id: ""
	I0804 00:18:55.655146   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.655157   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:55.655164   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:55.655217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:55.695809   64758 cri.go:89] found id: ""
	I0804 00:18:55.695837   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.695846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:55.695851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:55.695909   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:55.732784   64758 cri.go:89] found id: ""
	I0804 00:18:55.732811   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.732822   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:55.732828   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:55.732920   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:55.773316   64758 cri.go:89] found id: ""
	I0804 00:18:55.773338   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.773347   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:55.773368   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:55.773416   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:55.808886   64758 cri.go:89] found id: ""
	I0804 00:18:55.808913   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.808924   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:55.808931   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:55.808990   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:55.848471   64758 cri.go:89] found id: ""
	I0804 00:18:55.848499   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.848507   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:55.848513   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:55.848568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:55.884088   64758 cri.go:89] found id: ""
	I0804 00:18:55.884117   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.884128   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:55.884134   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:55.884194   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:55.918194   64758 cri.go:89] found id: ""
	I0804 00:18:55.918222   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.918233   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:55.918243   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:55.918264   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:55.932685   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:55.932717   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:56.003817   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:56.003840   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:56.003856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:56.087804   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:56.087846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:56.129959   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:56.129993   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:58.685077   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:58.698624   64758 kubeadm.go:597] duration metric: took 4m4.179874556s to restartPrimaryControlPlane
	W0804 00:18:58.698704   64758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:18:58.698731   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:19:03.967117   64758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.268366381s)
	I0804 00:19:03.967202   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:19:03.982098   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:19:03.991994   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:19:04.002780   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:19:04.002802   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:19:04.002845   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:19:04.012216   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:19:04.012279   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:19:04.021463   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:19:04.030689   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:19:04.030743   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:19:04.040801   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.050496   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:19:04.050558   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.060782   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:19:04.071595   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:19:04.071673   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
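	The grep/rm sequence above is the stale-kubeconfig cleanup: each conf under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. Roughly equivalent shell, with the endpoint value taken from the log (sketch):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # A missing file or a file without the endpoint both fail the grep; remove it so kubeadm regenerates it.
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done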
	I0804 00:19:04.082123   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:19:04.313172   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:21:00.664979   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:21:00.665100   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:21:00.666810   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:00.666904   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:00.667020   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:00.667150   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:00.667278   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:00.667370   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:00.670254   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:00.670337   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:00.670431   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:00.670537   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:00.670623   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:00.670721   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:00.670788   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:00.670883   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:00.670990   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:00.671079   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:00.671168   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:00.671217   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:00.671265   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:00.671359   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:00.671442   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:00.671529   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:00.671611   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:00.671756   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:00.671856   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:00.671888   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:00.671940   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:00.673410   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:00.673506   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:00.673573   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:00.673627   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:00.673692   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:00.673828   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:00.673876   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:00.673972   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674207   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674283   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674517   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674590   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674752   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674851   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675053   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675173   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675451   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675463   64758 kubeadm.go:310] 
	I0804 00:21:00.675511   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:21:00.675561   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:21:00.675571   64758 kubeadm.go:310] 
	I0804 00:21:00.675614   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:21:00.675656   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:21:00.675787   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:21:00.675797   64758 kubeadm.go:310] 
	I0804 00:21:00.675928   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:21:00.675970   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:21:00.676009   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:21:00.676026   64758 kubeadm.go:310] 
	I0804 00:21:00.676172   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:21:00.676278   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:21:00.676289   64758 kubeadm.go:310] 
	I0804 00:21:00.676393   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:21:00.676466   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:21:00.676532   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:21:00.676609   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:21:00.676632   64758 kubeadm.go:310] 
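	Everything kubeadm suggests above can be run by hand on the node; the probe that wait-control-plane keeps retrying is the kubelet health endpoint on port 10248. The commands below simply restate the ones already given in the message (sketch):

	    curl -sSL http://localhost:10248/healthz; echo         # the health check kubeadm keeps retrying
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause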
	W0804 00:21:00.676723   64758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
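	The preflight warning repeated above notes that the kubelet systemd unit is not enabled. When debugging interactively, one way to check and enable it (sketch; minikube normally manages the unit itself, so treat this purely as a diagnostic step):

	    sudo systemctl is-enabled kubelet
	    sudo systemctl enable --now kubelet      # enable at boot and start immediately
	    sudo systemctl is-active kubelet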
	
	I0804 00:21:00.676765   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:21:01.138781   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:21:01.154405   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:21:01.164426   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:21:01.164445   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:21:01.164496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:21:01.173853   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:21:01.173907   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:21:01.183634   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:21:01.193283   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:21:01.193342   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:21:01.202427   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.212186   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:21:01.212235   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.222754   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:21:01.232996   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:21:01.233059   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:21:01.243778   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:21:01.319895   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:01.319975   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:01.474907   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:01.475029   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:01.475119   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:01.683624   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:01.685482   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:01.685584   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:01.685691   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:01.685792   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:01.685880   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:01.685991   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:01.686067   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:01.686147   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:01.686285   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:01.686399   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:01.686513   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:01.686600   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:01.686670   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:01.985613   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:02.088377   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:02.336621   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:02.448654   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:02.470140   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:02.471390   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:02.471456   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:02.610840   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:02.612641   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:02.612744   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:02.627044   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:02.629019   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:02.630430   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:02.633022   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:42.635581   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:42.635740   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:42.636036   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:47.636656   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:47.636879   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:57.637900   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:57.638098   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:17.638425   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:17.638634   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637807   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:57.637988   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637996   64758 kubeadm.go:310] 
	I0804 00:22:57.638035   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:22:57.638079   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:22:57.638085   64758 kubeadm.go:310] 
	I0804 00:22:57.638118   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:22:57.638148   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:22:57.638288   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:22:57.638309   64758 kubeadm.go:310] 
	I0804 00:22:57.638426   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:22:57.638507   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:22:57.638619   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:22:57.638640   64758 kubeadm.go:310] 
	I0804 00:22:57.638829   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:22:57.638944   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:22:57.638959   64758 kubeadm.go:310] 
	I0804 00:22:57.639107   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:22:57.639191   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:22:57.639300   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:22:57.639399   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:22:57.639412   64758 kubeadm.go:310] 
	I0804 00:22:57.639782   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:22:57.639904   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:22:57.640012   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:22:57.640091   64758 kubeadm.go:394] duration metric: took 8m3.172057529s to StartCluster
	I0804 00:22:57.640138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:22:57.640202   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:22:57.684020   64758 cri.go:89] found id: ""
	I0804 00:22:57.684054   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.684064   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:22:57.684072   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:22:57.684134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:22:57.722756   64758 cri.go:89] found id: ""
	I0804 00:22:57.722780   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.722788   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:22:57.722793   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:22:57.722851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:22:57.760371   64758 cri.go:89] found id: ""
	I0804 00:22:57.760400   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.760412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:22:57.760419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:22:57.760476   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:22:57.796114   64758 cri.go:89] found id: ""
	I0804 00:22:57.796144   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.796155   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:22:57.796162   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:22:57.796211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:22:57.842148   64758 cri.go:89] found id: ""
	I0804 00:22:57.842179   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.842191   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:22:57.842198   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:22:57.842286   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:22:57.914193   64758 cri.go:89] found id: ""
	I0804 00:22:57.914218   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.914229   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:22:57.914236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:22:57.914290   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:22:57.965944   64758 cri.go:89] found id: ""
	I0804 00:22:57.965973   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.965984   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:22:57.965991   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:22:57.966063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:22:58.003016   64758 cri.go:89] found id: ""
	I0804 00:22:58.003044   64758 logs.go:276] 0 containers: []
	W0804 00:22:58.003055   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:22:58.003066   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:22:58.003093   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:22:58.017277   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:22:58.017304   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:22:58.094192   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:22:58.094214   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:22:58.094227   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:22:58.210901   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:22:58.210944   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:22:58.249283   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:22:58.249317   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:22:58.300998   64758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:22:58.301054   64758 out.go:239] * 
	* 
	W0804 00:22:58.301115   64758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.301137   64758 out.go:239] * 
	* 
	W0804 00:22:58.301978   64758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:22:58.305305   64758 out.go:177] 
	W0804 00:22:58.306722   64758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.306816   64758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:22:58.306848   64758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:22:58.308372   64758 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
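The log above shows the kubelet on the old-k8s-version node never becoming healthy (the health check on localhost:10248 is refused for the entire 4m0s wait-control-plane window), and minikube's own output suggests checking the kubelet journal and retrying with an explicit cgroup driver. A minimal reproduction sketch of that suggestion, reusing the exact invocation from the failed assertion above plus the suggested extra-config flag (profile name and environment are assumed to match this run; this is not part of the recorded test output):

	# inspect the kubelet on the failing node first, as the suggestion recommends
	out/minikube-linux-amd64 -p old-k8s-version-576210 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# retry the second start with the kubelet cgroup driver pinned to systemd
	out/minikube-linux-amd64 start -p old-k8s-version-576210 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Related issue referenced by the log: https://github.com/kubernetes/minikube/issues/4172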
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (231.984719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-576210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-576210 logs -n 25: (1.738770391s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302198                           | kubernetes-upgrade-302198    | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-551054 sudo                            | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877598            | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-705918                              | cert-expiration-705918       | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-423330 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | disable-driver-mounts-423330                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:09 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-118016             | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC | 04 Aug 24 00:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-576210        | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:11:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:11:52.361065   65441 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:11:52.361334   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361345   65441 out.go:304] Setting ErrFile to fd 2...
	I0804 00:11:52.361349   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361548   65441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:11:52.362087   65441 out.go:298] Setting JSON to false
	I0804 00:11:52.363002   65441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6856,"bootTime":1722723456,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:11:52.363061   65441 start.go:139] virtualization: kvm guest
	I0804 00:11:52.365345   65441 out.go:177] * [default-k8s-diff-port-969068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:11:52.367170   65441 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:11:52.367161   65441 notify.go:220] Checking for updates...
	I0804 00:11:52.369837   65441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:11:52.371134   65441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:11:52.372226   65441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:11:52.373445   65441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:11:52.374802   65441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:11:52.376375   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:11:52.376787   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.376859   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.392495   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0804 00:11:52.392954   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.393477   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.393497   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.393883   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.394048   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.394313   65441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:11:52.394606   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.394638   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.409194   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0804 00:11:52.409594   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.410032   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.410050   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.410358   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.410529   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.445480   65441 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:11:52.446679   65441 start.go:297] selected driver: kvm2
	I0804 00:11:52.446694   65441 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.446827   65441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:11:52.447792   65441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.447886   65441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:11:52.462893   65441 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:11:52.463275   65441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:11:52.463306   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:11:52.463316   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:11:52.463368   65441 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.463486   65441 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.465374   65441 out.go:177] * Starting "default-k8s-diff-port-969068" primary control-plane node in "default-k8s-diff-port-969068" cluster
	I0804 00:11:52.466656   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:11:52.466698   65441 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:11:52.466710   65441 cache.go:56] Caching tarball of preloaded images
	I0804 00:11:52.466790   65441 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:11:52.466801   65441 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:11:52.466901   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:11:52.467100   65441 start.go:360] acquireMachinesLock for default-k8s-diff-port-969068: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:11:55.809602   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:11:58.881666   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:04.961665   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:08.033617   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:14.113634   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:17.185623   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:23.265618   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:26.337594   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:32.417583   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:35.489705   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:41.569654   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:44.641653   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:50.721640   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:53.793649   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:59.873643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:02.945676   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:09.025652   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:12.097647   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:18.177740   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:21.249606   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:27.329637   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:30.401648   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:36.481588   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:39.553638   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:45.633633   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:48.705646   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:54.785636   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:57.857662   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:03.937643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:07.009557   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:13.089694   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:16.161619   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:22.241650   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:25.313612   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
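
The long run of "no route to host" lines above is the kvm2 driver repeatedly dialing the machine's SSH port until the guest network comes up. A minimal standalone sketch of that kind of dial-and-retry loop, using only the Go standard library (function name, interval, and overall timeout are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH keeps dialing addr until a TCP connection succeeds or the
    // overall timeout expires, mirroring the retry pattern in the log above.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port 22 is reachable; provisioning can continue
    		}
    		fmt.Println("Error dialing TCP:", err)
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
    	if err := waitForSSH("192.168.50.140:22", 5*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
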
	I0804 00:14:28.318586   64758 start.go:364] duration metric: took 4m16.324186239s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:14:28.318635   64758 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:28.318646   64758 fix.go:54] fixHost starting: 
	I0804 00:14:28.319092   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:28.319128   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:28.334850   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0804 00:14:28.335321   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:28.335817   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:14:28.335848   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:28.336204   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:28.336435   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:28.336622   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:14:28.338146   64758 fix.go:112] recreateIfNeeded on old-k8s-version-576210: state=Stopped err=<nil>
	I0804 00:14:28.338166   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	W0804 00:14:28.338322   64758 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:28.340640   64758 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	I0804 00:14:28.315605   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:28.315642   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316035   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:14:28.316073   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316325   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:14:28.318440   64502 machine.go:97] duration metric: took 4m37.42620041s to provisionDockerMachine
	I0804 00:14:28.318477   64502 fix.go:56] duration metric: took 4m37.448052873s for fixHost
	I0804 00:14:28.318485   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 4m37.44807127s
	W0804 00:14:28.318509   64502 start.go:714] error starting host: provision: host is not running
	W0804 00:14:28.318594   64502 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0804 00:14:28.318606   64502 start.go:729] Will try again in 5 seconds ...
	I0804 00:14:28.342217   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .Start
	I0804 00:14:28.342401   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:14:28.343274   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:14:28.343761   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:14:28.344268   64758 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:14:28.345080   64758 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:14:29.575420   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:14:29.576307   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.576754   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.576842   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.576711   66003 retry.go:31] will retry after 272.821874ms: waiting for machine to come up
	I0804 00:14:29.851363   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.851951   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.851976   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.851895   66003 retry.go:31] will retry after 247.116514ms: waiting for machine to come up
	I0804 00:14:30.100479   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.100883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.100916   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.100833   66003 retry.go:31] will retry after 353.251065ms: waiting for machine to come up
	I0804 00:14:30.455526   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.455975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.456004   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.455933   66003 retry.go:31] will retry after 558.071575ms: waiting for machine to come up
	I0804 00:14:31.015539   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.015974   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.016000   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.015917   66003 retry.go:31] will retry after 514.757536ms: waiting for machine to come up
	I0804 00:14:31.532799   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.533232   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.533250   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.533186   66003 retry.go:31] will retry after 607.548546ms: waiting for machine to come up
	I0804 00:14:33.318807   64502 start.go:360] acquireMachinesLock for embed-certs-877598: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:14:32.142162   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:32.142658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:32.142693   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:32.142610   66003 retry.go:31] will retry after 897.977595ms: waiting for machine to come up
	I0804 00:14:33.042628   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:33.043002   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:33.043028   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:33.042966   66003 retry.go:31] will retry after 1.094117762s: waiting for machine to come up
	I0804 00:14:34.138946   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:34.139459   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:34.139485   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:34.139414   66003 retry.go:31] will retry after 1.435055372s: waiting for machine to come up
	I0804 00:14:35.576253   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:35.576603   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:35.576625   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:35.576547   66003 retry.go:31] will retry after 1.688006591s: waiting for machine to come up
	I0804 00:14:37.265928   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:37.266429   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:37.266456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:37.266371   66003 retry.go:31] will retry after 2.356818801s: waiting for machine to come up
	I0804 00:14:39.624408   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:39.624832   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:39.624863   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:39.624775   66003 retry.go:31] will retry after 2.41856098s: waiting for machine to come up
	I0804 00:14:46.442402   65087 start.go:364] duration metric: took 3m44.405576801s to acquireMachinesLock for "no-preload-118016"
	I0804 00:14:46.442459   65087 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:46.442469   65087 fix.go:54] fixHost starting: 
	I0804 00:14:46.442938   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:46.442975   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:46.459944   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0804 00:14:46.460375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:46.460851   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:14:46.460871   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:46.461211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:46.461402   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:14:46.461538   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:14:46.463097   65087 fix.go:112] recreateIfNeeded on no-preload-118016: state=Stopped err=<nil>
	I0804 00:14:46.463126   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	W0804 00:14:46.463282   65087 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:46.465711   65087 out.go:177] * Restarting existing kvm2 VM for "no-preload-118016" ...
	I0804 00:14:42.044498   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:42.044855   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:42.044882   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:42.044822   66003 retry.go:31] will retry after 3.111190148s: waiting for machine to come up
	I0804 00:14:45.158161   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.158688   64758 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:14:45.158709   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:14:45.158719   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.159112   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.159138   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | skip adding static IP to network mk-old-k8s-version-576210 - found existing host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"}
	I0804 00:14:45.159151   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:14:45.159163   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:14:45.159172   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:14:45.161469   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161782   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.161812   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161936   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:14:45.161975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:14:45.162015   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:14:45.162034   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:14:45.162044   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:14:45.281546   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
	I0804 00:14:45.281859   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:14:45.282574   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.284998   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285386   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.285414   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
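
In the DBG lines above the driver learns the VM's IP by matching the domain's MAC address (52:54:00:cc:b7:b1) against the DHCP leases of the mk-old-k8s-version-576210 network. The real lookup goes through the libvirt API; the sketch below only illustrates the same matching against a dnsmasq-style leases file (whitespace-separated fields: expiry, MAC, IP, hostname, client id), and the file path and helper name are assumptions:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC scans a dnsmasq-style leases file and returns the IP recorded
    // for the given MAC address, or an error if no lease matches.
    func ipForMAC(leasesPath, mac string) (string, error) {
    	f, err := os.Open(leasesPath)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 3 && strings.EqualFold(fields[1], mac) {
    			return fields[2], nil
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no lease found for MAC %s", mac)
    }

    func main() {
    	// Path is an assumption for illustration; the network name matches the log.
    	ip, err := ipForMAC("/var/lib/libvirt/dnsmasq/mk-old-k8s-version-576210.leases", "52:54:00:cc:b7:b1")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("machine IP:", ip) // 192.168.72.154 in the run above
    }
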
	I0804 00:14:45.285614   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:14:45.285806   64758 machine.go:94] provisionDockerMachine start ...
	I0804 00:14:45.285823   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:45.286098   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.288285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288640   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.288668   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288753   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.288931   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289088   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289253   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.289426   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.289628   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.289640   64758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:14:45.386001   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:14:45.386036   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386325   64758 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:14:45.386348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386536   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.389316   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389718   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.389739   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389948   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.390122   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390285   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390415   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.390557   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.390758   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.390776   64758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:14:45.499644   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:14:45.499695   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.502583   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.502935   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.502959   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.503123   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.503318   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503456   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503570   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.503729   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.503898   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.503915   64758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:14:45.606971   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:45.607003   64758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:14:45.607045   64758 buildroot.go:174] setting up certificates
	I0804 00:14:45.607053   64758 provision.go:84] configureAuth start
	I0804 00:14:45.607062   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.607327   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.610009   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610378   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.610407   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610545   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.612549   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.612876   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.612908   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.613071   64758 provision.go:143] copyHostCerts
	I0804 00:14:45.613134   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:14:45.613147   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:14:45.613231   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:14:45.613343   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:14:45.613368   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:14:45.613410   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:14:45.613491   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:14:45.613501   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:14:45.613535   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:14:45.613609   64758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
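
The provision step above generates a server certificate whose subject alternative names cover 127.0.0.1, the machine IP, localhost, minikube, and the profile name, signed with the local CA. A rough, self-signed sketch of building such a certificate with Go's crypto/x509 (illustrative only; the real flow signs with ca.pem/ca-key.pem rather than self-signing, and the organization string is taken from the log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-576210"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-576210"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.154")},
    	}
    	// Self-signed for brevity; template doubles as parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
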
	I0804 00:14:45.794221   64758 provision.go:177] copyRemoteCerts
	I0804 00:14:45.794276   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:14:45.794299   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.796859   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797182   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.797225   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.797555   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.797687   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.797804   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:45.875704   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:14:45.903765   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:14:45.930101   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:14:45.955639   64758 provision.go:87] duration metric: took 348.556108ms to configureAuth
	I0804 00:14:45.955668   64758 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:14:45.955874   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:14:45.955960   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.958487   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958835   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.958950   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958970   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.959193   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.959616   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.959789   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.959810   64758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:14:46.217683   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:14:46.217725   64758 machine.go:97] duration metric: took 931.901933ms to provisionDockerMachine
	I0804 00:14:46.217742   64758 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:14:46.217758   64758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:14:46.217787   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.218127   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:14:46.218151   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.220834   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221148   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.221170   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221342   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.221576   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.221733   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.221867   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.300102   64758 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:14:46.304434   64758 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:14:46.304464   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:14:46.304538   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:14:46.304631   64758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:14:46.304747   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:14:46.314378   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:46.339057   64758 start.go:296] duration metric: took 121.299069ms for postStartSetup
	I0804 00:14:46.339105   64758 fix.go:56] duration metric: took 18.020458894s for fixHost
	I0804 00:14:46.339129   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.341883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342258   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.342285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.342688   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342856   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342992   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.343161   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:46.343385   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:46.343400   64758 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:14:46.442247   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730486.414818212
	
	I0804 00:14:46.442275   64758 fix.go:216] guest clock: 1722730486.414818212
	I0804 00:14:46.442288   64758 fix.go:229] Guest: 2024-08-04 00:14:46.414818212 +0000 UTC Remote: 2024-08-04 00:14:46.339109981 +0000 UTC m=+274.490542023 (delta=75.708231ms)
	I0804 00:14:46.442313   64758 fix.go:200] guest clock delta is within tolerance: 75.708231ms
	I0804 00:14:46.442319   64758 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 18.123699316s
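
The guest-clock check above runs `date +%s.%N` on the machine over SSH, parses the result, and verifies that the delta against the host clock (about 75.7ms in this run) is within tolerance. A small sketch of that comparison, using the timestamps from the log; the helper name and the 1s tolerance are assumptions for illustration:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is from the given host timestamp.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	// Values taken from the log lines above.
    	host := time.Date(2024, time.August, 4, 0, 14, 46, 339109981, time.UTC)
    	delta, err := clockDelta("1722730486.414818212", host)
    	if err != nil {
    		panic(err)
    	}
    	within := math.Abs(delta.Seconds()) < 1.0 // 1s tolerance, assumed for the sketch
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, within)
    }
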
	I0804 00:14:46.442347   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.442656   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:46.445456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.445865   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.445892   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.446069   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446577   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446743   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446816   64758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:14:46.446850   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.446965   64758 ssh_runner.go:195] Run: cat /version.json
	I0804 00:14:46.446987   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.449576   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449794   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449953   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.449983   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450178   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450265   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.450317   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450384   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450520   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450605   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450667   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450733   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.450780   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450910   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.534686   64758 ssh_runner.go:195] Run: systemctl --version
	I0804 00:14:46.554270   64758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:14:46.708220   64758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:14:46.714541   64758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:14:46.714607   64758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:14:46.731642   64758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:14:46.731668   64758 start.go:495] detecting cgroup driver to use...
	I0804 00:14:46.731739   64758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:14:46.748782   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:14:46.763556   64758 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:14:46.763640   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:14:46.778075   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:14:46.793133   64758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:14:46.466927   65087 main.go:141] libmachine: (no-preload-118016) Calling .Start
	I0804 00:14:46.467081   65087 main.go:141] libmachine: (no-preload-118016) Ensuring networks are active...
	I0804 00:14:46.467696   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network default is active
	I0804 00:14:46.468023   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network mk-no-preload-118016 is active
	I0804 00:14:46.468344   65087 main.go:141] libmachine: (no-preload-118016) Getting domain xml...
	I0804 00:14:46.468932   65087 main.go:141] libmachine: (no-preload-118016) Creating domain...
	I0804 00:14:46.918377   64758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:14:47.059683   64758 docker.go:233] disabling docker service ...
	I0804 00:14:47.059753   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:14:47.074819   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:14:47.092184   64758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:14:47.235274   64758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:14:47.357937   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:14:47.375273   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:14:47.395182   64758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:14:47.395236   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.407036   64758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:14:47.407092   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.418562   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.434481   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
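The three CRI-O edits above (pause image, cgroup manager, conmon cgroup) leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the keys below. This is a minimal sketch reconstructed from the sed commands; the [crio.*] section headers and any other keys already present in the file are assumptions, since the log only shows the three values being set:

[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"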
	I0804 00:14:47.447488   64758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:14:47.460242   64758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:14:47.471089   64758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:14:47.471143   64758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:14:47.486698   64758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
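The netfilter handling above follows a check-then-fallback pattern: probe the bridge-nf-call-iptables sysctl, load br_netfilter when the probe fails with status 255, then force IPv4 forwarding on. The Go sketch below mirrors that flow; ensureBridgeNetfilter is an illustrative helper, not minikube's actual function.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the three Run: lines above.
func ensureBridgeNetfilter() error {
	// The sysctl only exists once br_netfilter is loaded, which is why the
	// probe above exits with status 255 on a freshly booted VM.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Enable IPv4 forwarding so pod traffic can be routed off the node.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}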
	I0804 00:14:47.498754   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:47.630867   64758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:14:47.796598   64758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:14:47.796690   64758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:14:47.802302   64758 start.go:563] Will wait 60s for crictl version
	I0804 00:14:47.802364   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:47.806368   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:14:47.847588   64758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:14:47.847679   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.877936   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.908229   64758 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:14:47.909635   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:47.912658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913102   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:47.913130   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913438   64758 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:14:47.917910   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:47.931201   64758 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:14:47.931318   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:14:47.931381   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:47.980001   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:47.980071   64758 ssh_runner.go:195] Run: which lz4
	I0804 00:14:47.984277   64758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:14:47.988781   64758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:14:47.988810   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:14:49.706968   64758 crio.go:462] duration metric: took 1.722721175s to copy over tarball
	I0804 00:14:49.707059   64758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:14:47.715321   65087 main.go:141] libmachine: (no-preload-118016) Waiting to get IP...
	I0804 00:14:47.716397   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.716853   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.716889   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.716820   66120 retry.go:31] will retry after 187.841432ms: waiting for machine to come up
	I0804 00:14:47.906481   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.906984   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.907018   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.906942   66120 retry.go:31] will retry after 389.569097ms: waiting for machine to come up
	I0804 00:14:48.298691   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.299997   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.300021   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.299947   66120 retry.go:31] will retry after 382.905254ms: waiting for machine to come up
	I0804 00:14:48.684628   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.685095   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.685127   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.685066   66120 retry.go:31] will retry after 526.267085ms: waiting for machine to come up
	I0804 00:14:49.213459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.214180   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.214203   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.214142   66120 retry.go:31] will retry after 666.253139ms: waiting for machine to come up
	I0804 00:14:49.882141   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.882610   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.882639   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.882560   66120 retry.go:31] will retry after 776.560525ms: waiting for machine to come up
	I0804 00:14:50.660679   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:50.661149   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:50.661177   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:50.661105   66120 retry.go:31] will retry after 825.927722ms: waiting for machine to come up
	I0804 00:14:51.488562   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:51.488937   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:51.488964   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:51.488894   66120 retry.go:31] will retry after 1.210535859s: waiting for machine to come up
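The repeated retry.go lines above (187ms, 389ms, 526ms, and so on up to several seconds) are a poll for the VM's DHCP-assigned IP with a growing, jittered delay. A minimal Go sketch of that wait loop follows; waitForIP and getIP are illustrative stand-ins for the libvirt lease lookup, and the backoff constants are assumptions rather than minikube's exact values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// getIP is a placeholder for the DHCP-lease lookup done by libmachine.
func getIP() (string, error) { return "", errNoIP }

// waitForIP polls getIP, sleeping a little longer (with jitter) after each miss.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		// Jittered, growing delay, roughly matching the intervals in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}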
	I0804 00:14:52.511242   64758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.804147671s)
	I0804 00:14:52.511275   64758 crio.go:469] duration metric: took 2.804279705s to extract the tarball
	I0804 00:14:52.511285   64758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:14:52.553905   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:52.587405   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:52.587429   64758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:14:52.587496   64758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.587513   64758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.587550   64758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.587551   64758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.587554   64758 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.587567   64758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.587570   64758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.587577   64758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.589240   64758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.589239   64758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.589247   64758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.589211   64758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.589287   64758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589579   64758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.742969   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.766505   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.782813   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:14:52.788509   64758 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:14:52.788553   64758 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.788598   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.823108   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.829531   64758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:14:52.829577   64758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.829648   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.858209   64758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:14:52.858238   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.858245   64758 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:14:52.858288   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.888665   64758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:14:52.888717   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.888748   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:14:52.888717   64758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.888794   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.918127   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.921386   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:14:52.929839   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.977866   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:14:52.977919   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.977960   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:14:52.994379   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.003198   64758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:14:53.003233   64758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.003273   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.056310   64758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:14:53.056338   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:14:53.056357   64758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.056403   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.062077   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.062119   64758 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:14:53.062161   64758 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.062206   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.064260   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.114709   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:14:53.114758   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.118375   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:14:53.147635   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:14:53.497155   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:53.647242   64758 cache_images.go:92] duration metric: took 1.059794593s to LoadCachedImages
	W0804 00:14:53.647353   64758 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0804 00:14:53.647370   64758 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:14:53.647507   64758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:14:53.647586   64758 ssh_runner.go:195] Run: crio config
	I0804 00:14:53.710377   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:14:53.710399   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:14:53.710411   64758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:14:53.710437   64758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:14:53.710583   64758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:14:53.710661   64758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:14:53.721942   64758 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:14:53.722005   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:14:53.732623   64758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:14:53.749878   64758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:14:53.767147   64758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0804 00:14:53.785522   64758 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:14:53.789438   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:53.802152   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:53.934508   64758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:14:53.952247   64758 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:14:53.952280   64758 certs.go:194] generating shared ca certs ...
	I0804 00:14:53.952301   64758 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:53.952470   64758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:14:53.952523   64758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:14:53.952536   64758 certs.go:256] generating profile certs ...
	I0804 00:14:53.952658   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:14:53.952730   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:14:53.952783   64758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:14:53.952948   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:14:53.953000   64758 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:14:53.953013   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:14:53.953048   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:14:53.953084   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:14:53.953114   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:14:53.953191   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:53.954013   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:14:54.001446   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:14:54.029628   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:14:54.062713   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:14:54.090711   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:14:54.117970   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:14:54.163691   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:14:54.190151   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:14:54.219334   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:14:54.244677   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:14:54.269795   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:14:54.294949   64758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:14:54.312330   64758 ssh_runner.go:195] Run: openssl version
	I0804 00:14:54.318320   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:14:54.328932   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333686   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333737   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.341330   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:14:54.356008   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:14:54.368966   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373896   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373954   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.379770   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:14:54.390903   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:14:54.402637   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407296   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407362   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.413215   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
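The three certificate installs above each follow the same recipe: compute the OpenSSL subject hash of the PEM file, then symlink it as <hash>.0 under /etc/ssl/certs so the system trust store can resolve it (for example minikubeCA.pem becomes b5213941.0). The Go sketch below reproduces that recipe; installCA and the paths are illustrative, not minikube's actual helper.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA hashes a PEM certificate and links it into /etc/ssl/certs.
func installCA(pemPath string) error {
	// "openssl x509 -hash -noout" prints the subject hash, e.g. "51391683".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// Match the "test -L ... || ln -fs ..." behaviour: only create a missing link.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}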
	I0804 00:14:54.424473   64758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:14:54.429673   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:14:54.436038   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:14:54.442091   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:14:54.448507   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:14:54.455421   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:14:54.461969   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:14:54.468042   64758 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:14:54.468151   64758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:14:54.468208   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.508109   64758 cri.go:89] found id: ""
	I0804 00:14:54.508183   64758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:14:54.518712   64758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:14:54.518736   64758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:14:54.518788   64758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:14:54.528545   64758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:14:54.529780   64758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:14:54.530411   64758 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-576210" cluster setting kubeconfig missing "old-k8s-version-576210" context setting]
	I0804 00:14:54.531316   64758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:54.550431   64758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:14:54.561047   64758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.154
	I0804 00:14:54.561086   64758 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:14:54.561108   64758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:14:54.561163   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.597213   64758 cri.go:89] found id: ""
	I0804 00:14:54.597282   64758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:14:54.612914   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:14:54.622533   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:14:54.622562   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:14:54.622613   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:14:54.632746   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:14:54.632812   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:14:54.642197   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:14:54.651204   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:14:54.651268   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:14:54.660496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.669448   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:14:54.669512   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.678773   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:14:54.687854   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:14:54.687902   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:14:54.697066   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:14:54.707036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:54.840553   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.551919   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.790500   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.898210   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.995621   64758 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:14:55.995711   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.496072   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:52.701200   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:52.701574   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:52.701598   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:52.701547   66120 retry.go:31] will retry after 1.518623613s: waiting for machine to come up
	I0804 00:14:54.221367   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:54.221886   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:54.221916   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:54.221835   66120 retry.go:31] will retry after 1.869121058s: waiting for machine to come up
	I0804 00:14:56.092101   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:56.092527   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:56.092550   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:56.092488   66120 retry.go:31] will retry after 2.071227436s: waiting for machine to come up
	I0804 00:14:56.995965   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.496285   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.995805   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.496549   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.996224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.496360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.996056   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.496435   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
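The block of pgrep lines above is the "waiting for apiserver process to appear" loop: the same probe is issued roughly every 500ms until a kube-apiserver process for this profile shows up or the wait gives up. A minimal Go sketch of that polling loop follows; the pgrep pattern is copied from the log, while waitForAPIServerProcess and the timeout value are illustrative assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess repeats the pgrep probe until it succeeds or times out.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Succeeds once a kube-apiserver process started for this minikube profile exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}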
	I0804 00:14:58.166383   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:58.166760   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:58.166807   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:58.166729   66120 retry.go:31] will retry after 2.352991709s: waiting for machine to come up
	I0804 00:15:00.522153   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:00.522630   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:15:00.522657   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:15:00.522584   66120 retry.go:31] will retry after 3.326179831s: waiting for machine to come up
	I0804 00:15:05.170439   65441 start.go:364] duration metric: took 3m12.703297591s to acquireMachinesLock for "default-k8s-diff-port-969068"
	I0804 00:15:05.170512   65441 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:05.170520   65441 fix.go:54] fixHost starting: 
	I0804 00:15:05.170935   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:05.170974   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:05.188546   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0804 00:15:05.188997   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:05.189494   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:05.189518   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:05.189933   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:05.190132   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:05.190276   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:05.191653   65441 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969068: state=Stopped err=<nil>
	I0804 00:15:05.191684   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	W0804 00:15:05.191834   65441 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:05.194275   65441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-969068" ...
	I0804 00:15:01.996148   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.496756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.996430   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.496646   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.996707   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.496772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.995997   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.496651   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.996384   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.496403   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.850063   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850518   65087 main.go:141] libmachine: (no-preload-118016) Found IP for machine: 192.168.61.137
	I0804 00:15:03.850544   65087 main.go:141] libmachine: (no-preload-118016) Reserving static IP address...
	I0804 00:15:03.850559   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has current primary IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850970   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.851001   65087 main.go:141] libmachine: (no-preload-118016) DBG | skip adding static IP to network mk-no-preload-118016 - found existing host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"}
	I0804 00:15:03.851015   65087 main.go:141] libmachine: (no-preload-118016) Reserved static IP address: 192.168.61.137
	I0804 00:15:03.851030   65087 main.go:141] libmachine: (no-preload-118016) Waiting for SSH to be available...
	I0804 00:15:03.851048   65087 main.go:141] libmachine: (no-preload-118016) DBG | Getting to WaitForSSH function...
	I0804 00:15:03.853316   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853676   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.853705   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853819   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH client type: external
	I0804 00:15:03.853850   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa (-rw-------)
	I0804 00:15:03.853886   65087 main.go:141] libmachine: (no-preload-118016) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:03.853901   65087 main.go:141] libmachine: (no-preload-118016) DBG | About to run SSH command:
	I0804 00:15:03.853913   65087 main.go:141] libmachine: (no-preload-118016) DBG | exit 0
	I0804 00:15:03.981414   65087 main.go:141] libmachine: (no-preload-118016) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:03.981807   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetConfigRaw
	I0804 00:15:03.982419   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:03.985062   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985400   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.985433   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985674   65087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/config.json ...
	I0804 00:15:03.985857   65087 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:03.985873   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:03.986090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:03.988490   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.988798   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.988826   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.989017   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:03.989183   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989342   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989510   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:03.989697   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:03.989916   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:03.989927   65087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:04.106042   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:04.106090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106372   65087 buildroot.go:166] provisioning hostname "no-preload-118016"
	I0804 00:15:04.106398   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.109434   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.109803   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109919   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.110092   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110248   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110423   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.110582   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.110749   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.110764   65087 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-118016 && echo "no-preload-118016" | sudo tee /etc/hostname
	I0804 00:15:04.239856   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-118016
	
	I0804 00:15:04.239884   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.242877   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243241   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.243271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243486   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.243712   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.243897   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.244046   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.244232   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.244420   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.244443   65087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118016/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:04.367259   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
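
The empty output above means the /etc/hosts fix-up succeeded: the guest now maps 127.0.1.1 to its new hostname. The same logic as a slightly simplified standalone sketch, meant to be run on the guest (NODE is an illustrative variable):

    # Simplified sketch of the /etc/hosts update run over SSH above.
    NODE=no-preload-118016
    if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
      else
        echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
      fi
    fi
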
	I0804 00:15:04.367289   65087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:04.367330   65087 buildroot.go:174] setting up certificates
	I0804 00:15:04.367340   65087 provision.go:84] configureAuth start
	I0804 00:15:04.367432   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.367848   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:04.370330   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370630   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.370658   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370744   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.372799   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373175   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.373203   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373308   65087 provision.go:143] copyHostCerts
	I0804 00:15:04.373386   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:04.373399   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:04.373458   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:04.373557   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:04.373565   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:04.373585   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:04.373651   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:04.373657   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:04.373675   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:04.373732   65087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.no-preload-118016 san=[127.0.0.1 192.168.61.137 localhost minikube no-preload-118016]
	I0804 00:15:04.467261   65087 provision.go:177] copyRemoteCerts
	I0804 00:15:04.467322   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:04.467347   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.469843   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470126   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.470154   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470297   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.470478   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.470644   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.470761   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:04.559980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:04.585701   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:04.610270   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:04.633954   65087 provision.go:87] duration metric: took 266.53536ms to configureAuth
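
configureAuth above regenerated the machine's server certificate with SANs [127.0.0.1 192.168.61.137 localhost minikube no-preload-118016] and copied it to /etc/docker on the guest. A quick, illustrative way to confirm what was installed (assumes openssl is available in the Buildroot guest image):

    out/minikube-linux-amd64 -p no-preload-118016 ssh \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName"
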
	I0804 00:15:04.633981   65087 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:04.634154   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:15:04.634219   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.636880   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637243   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.637271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637452   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.637664   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637823   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637921   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.638060   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.638234   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.638250   65087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:04.916045   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:04.916077   65087 machine.go:97] duration metric: took 930.20802ms to provisionDockerMachine
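
The "%!s(MISSING)" in the command above is almost certainly a logging artifact (a %s verb printed without its argument); the command output confirms the drop-in that actually gets written. Reconstructed by hand it is roughly:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
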
	I0804 00:15:04.916088   65087 start.go:293] postStartSetup for "no-preload-118016" (driver="kvm2")
	I0804 00:15:04.916100   65087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:04.916113   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:04.916429   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:04.916453   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.919155   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919485   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.919514   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919657   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.919859   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.920026   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.920166   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.012754   65087 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:05.017004   65087 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:05.017024   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:05.017091   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:05.017180   65087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:05.017293   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:05.026980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:05.051265   65087 start.go:296] duration metric: took 135.164451ms for postStartSetup
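
postStartSetup synced one local asset, the host's test certificate 167952.pem, into /etc/ssl/certs on the guest. An illustrative check (not part of the test) that it landed:

    out/minikube-linux-amd64 -p no-preload-118016 ssh "ls -l /etc/ssl/certs/167952.pem"
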
	I0804 00:15:05.051309   65087 fix.go:56] duration metric: took 18.608839754s for fixHost
	I0804 00:15:05.051331   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.054286   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054683   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.054710   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054876   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.055127   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055321   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055485   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.055668   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:05.055870   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:05.055882   65087 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:05.170285   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730505.141206116
	
	I0804 00:15:05.170314   65087 fix.go:216] guest clock: 1722730505.141206116
	I0804 00:15:05.170321   65087 fix.go:229] Guest: 2024-08-04 00:15:05.141206116 +0000 UTC Remote: 2024-08-04 00:15:05.051313292 +0000 UTC m=+243.154971169 (delta=89.892824ms)
	I0804 00:15:05.170341   65087 fix.go:200] guest clock delta is within tolerance: 89.892824ms
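
The clock check above compares the guest's "date +%s.%N" against the host clock and only forces a resync when the delta exceeds the tolerance; here 89.9ms is accepted. A rough replay of the same comparison (key path and IP from this log; awk handles the float arithmetic):

    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa \
      docker@192.168.61.137 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %.3f s\n", h - g }'
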
	I0804 00:15:05.170359   65087 start.go:83] releasing machines lock for "no-preload-118016", held for 18.727925423s
	I0804 00:15:05.170392   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.170673   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:05.173694   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174084   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.174117   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174265   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.174828   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175015   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175103   65087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:05.175145   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.175263   65087 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:05.175286   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.177906   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178280   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178307   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178329   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178470   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.178688   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.178777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178832   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178854   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.178945   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.179025   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.179111   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.179265   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.179417   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.282397   65087 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:05.288682   65087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:05.434388   65087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:05.440857   65087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:05.440937   65087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:05.461853   65087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:05.461879   65087 start.go:495] detecting cgroup driver to use...
	I0804 00:15:05.461944   65087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:05.478397   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:05.494093   65087 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:05.494151   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:05.509391   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:05.524127   65087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:05.640185   65087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:05.784994   65087 docker.go:233] disabling docker service ...
	I0804 00:15:05.785071   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:05.802802   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:05.818424   65087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:05.970147   65087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:06.099759   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
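
Because the runtime is CRI-O, the sequence above stops and disables/masks cri-docker and docker so they cannot be socket-activated later. A quick check that this stuck, using standard systemd commands (shown for illustration, run on the guest):

    systemctl is-enabled cri-docker.socket cri-docker.service docker.socket docker.service
    # per the commands above: cri-docker.service and docker.service masked, the sockets disabled
    sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker stopped"
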
	I0804 00:15:06.114434   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:06.132989   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:06.433914   65087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0804 00:15:06.433969   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.452155   65087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:06.452245   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.464730   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.475848   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.488341   65087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:06.501984   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.514776   65087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.534773   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.547076   65087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:06.558639   65087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:06.558695   65087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:06.572920   65087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:06.583298   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:06.705307   65087 ssh_runner.go:195] Run: sudo systemctl restart crio
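
The sed edits between 00:15:06.43 and 00:15:06.53 above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Spot-checking the result on the guest, with expected values inferred from those sed expressions:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
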
	I0804 00:15:06.845776   65087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:06.845840   65087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:06.851710   65087 start.go:563] Will wait 60s for crictl version
	I0804 00:15:06.851764   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:06.855899   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:06.904392   65087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:06.904493   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.932866   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.963071   65087 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0804 00:15:05.195984   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Start
	I0804 00:15:05.196175   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring networks are active...
	I0804 00:15:05.196904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network default is active
	I0804 00:15:05.197256   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network mk-default-k8s-diff-port-969068 is active
	I0804 00:15:05.197709   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Getting domain xml...
	I0804 00:15:05.198474   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Creating domain...
	I0804 00:15:06.489009   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting to get IP...
	I0804 00:15:06.490137   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490569   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490641   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.490549   66290 retry.go:31] will retry after 298.701839ms: waiting for machine to come up
	I0804 00:15:06.791467   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791938   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791960   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.791894   66290 retry.go:31] will retry after 373.395742ms: waiting for machine to come up
	I0804 00:15:07.166622   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167139   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.167048   66290 retry.go:31] will retry after 404.799649ms: waiting for machine to come up
	I0804 00:15:06.995779   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.495822   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.995970   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.495870   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.996379   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.495852   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.495912   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.996591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.495964   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
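
Interleaved here is a second, concurrently restarting profile (PID 64758) polling roughly twice a second for a kube-apiserver process while its control plane comes back. Written out as a sketch (not minikube's actual code), the loop it is effectively running is:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
    echo "kube-apiserver process found"
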
	I0804 00:15:06.964314   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:06.967088   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967517   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:06.967547   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967787   65087 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:06.973133   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:06.990153   65087 kubeadm.go:883] updating cluster {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:06.990339   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.297536   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.591746   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.874720   65087 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:15:07.874798   65087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:07.914104   65087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0804 00:15:07.914127   65087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:15:07.914172   65087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.914212   65087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:07.914237   65087 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0804 00:15:07.914253   65087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.914324   65087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.914225   65087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.915833   65087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915838   65087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.915816   65087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 00:15:07.915882   65087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.915962   65087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.916150   65087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.048225   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.050828   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.051873   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.056880   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.087643   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.091720   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0804 00:15:08.116485   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.173591   65087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0804 00:15:08.173642   65087 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.173686   65087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0804 00:15:08.173704   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.173725   65087 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.173777   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.191254   65087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0804 00:15:08.191298   65087 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.191352   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.195238   65087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0804 00:15:08.195290   65087 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.195340   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.246005   65087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0804 00:15:08.246048   65087 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.246100   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.336855   65087 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0804 00:15:08.336936   65087 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.336945   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.336965   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.337078   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.337120   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.337161   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.337207   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.425270   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425297   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.425296   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.425455   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425522   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.458378   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.458520   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.460719   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460827   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460889   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0804 00:15:08.460983   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:08.492690   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0804 00:15:08.492789   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492808   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492839   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:08.492852   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492863   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492932   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492976   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0804 00:15:08.493036   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0804 00:15:08.763401   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063302   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.570424927s)
	I0804 00:15:11.063326   65087 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.570469177s)
	I0804 00:15:11.063341   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0804 00:15:11.063348   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0804 00:15:11.063355   65087 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063377   65087 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.299939136s)
	I0804 00:15:11.063414   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063438   65087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0804 00:15:11.063468   65087 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063516   65087 ssh_runner.go:195] Run: which crictl
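
Since this is the no-preload profile there is no preloaded image tarball (see 00:15:07.914 above), so each required image is checked with "podman image inspect", stale copies are removed with "crictl rmi", and the cached tarballs under /var/lib/minikube/images are loaded with "podman load"; on the minikube guest podman and CRI-O share the same containers/storage, so CRI-O then sees the loaded images. One load from the sequence above, replayed by hand on the guest as an illustration:

    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.15-0 >/dev/null 2>&1 \
      || sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
    sudo crictl images | grep etcd    # CRI-O should now list the image
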
	I0804 00:15:07.573639   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574103   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574150   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.574068   66290 retry.go:31] will retry after 552.033422ms: waiting for machine to come up
	I0804 00:15:08.127755   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128317   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128345   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.128254   66290 retry.go:31] will retry after 601.661676ms: waiting for machine to come up
	I0804 00:15:08.731160   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731571   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.731526   66290 retry.go:31] will retry after 899.954536ms: waiting for machine to come up
	I0804 00:15:09.632769   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633217   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:09.633188   66290 retry.go:31] will retry after 1.096119877s: waiting for machine to come up
	I0804 00:15:10.731586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732092   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732116   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:10.732062   66290 retry.go:31] will retry after 1.09033143s: waiting for machine to come up
	I0804 00:15:11.824287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824697   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824723   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:11.824648   66290 retry.go:31] will retry after 1.458040473s: waiting for machine to come up
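
Meanwhile PID 65441 has recreated the default-k8s-diff-port-969068 domain and retries with growing backoff until the guest picks up a DHCP lease on mk-default-k8s-diff-port-969068. From the libvirt host the same information can be watched directly (illustrative; names taken from this log):

    sudo virsh net-dhcp-leases mk-default-k8s-diff-port-969068
    sudo virsh domifaddr default-k8s-diff-port-969068 --source lease
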
	I0804 00:15:11.996494   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.496005   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.996429   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.496310   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.996525   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.495995   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.996172   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.495809   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.996016   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.496210   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.840723   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.777281435s)
	I0804 00:15:14.840759   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0804 00:15:14.840758   65087 ssh_runner.go:235] Completed: which crictl: (3.777229082s)
	I0804 00:15:14.840769   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:14.894482   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0804 00:15:14.894607   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:16.729218   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (1.888374505s)
	I0804 00:15:16.729270   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0804 00:15:16.729277   65087 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.834630766s)
	I0804 00:15:16.729304   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:16.729312   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0804 00:15:16.729368   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:13.284961   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:13.285332   66290 retry.go:31] will retry after 2.307816709s: waiting for machine to come up
	I0804 00:15:15.594435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594855   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594885   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:15.594804   66290 retry.go:31] will retry after 2.83542957s: waiting for machine to come up
	I0804 00:15:16.996765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.496069   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.995828   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.495847   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.996276   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.496155   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.996708   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.996145   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.496193   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.031187   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.301792704s)
	I0804 00:15:19.031309   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0804 00:15:19.031343   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:19.031389   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:20.493093   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.461677557s)
	I0804 00:15:20.493134   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0804 00:15:20.493152   65087 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:20.493202   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:18.433690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434156   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434188   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:18.434105   66290 retry.go:31] will retry after 2.563856777s: waiting for machine to come up
	I0804 00:15:20.999804   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000307   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:21.000236   66290 retry.go:31] will retry after 3.783170851s: waiting for machine to come up
	I0804 00:15:26.095635   64502 start.go:364] duration metric: took 52.776761645s to acquireMachinesLock for "embed-certs-877598"
	I0804 00:15:26.095695   64502 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:26.095703   64502 fix.go:54] fixHost starting: 
	I0804 00:15:26.096104   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:26.096143   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:26.113770   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0804 00:15:26.114303   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:26.114742   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:15:26.114768   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:26.115137   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:26.115330   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:26.115508   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:15:26.117156   64502 fix.go:112] recreateIfNeeded on embed-certs-877598: state=Stopped err=<nil>
	I0804 00:15:26.117179   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	W0804 00:15:26.117343   64502 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:26.119743   64502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-877598" ...
	I0804 00:15:21.996520   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.495922   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.995766   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.495923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.995770   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.496788   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.996759   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.996017   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.496445   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.363529   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870304087s)
	I0804 00:15:22.363559   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0804 00:15:22.363573   65087 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:22.363618   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:23.009879   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0804 00:15:23.009924   65087 cache_images.go:123] Successfully loaded all cached images
	I0804 00:15:23.009932   65087 cache_images.go:92] duration metric: took 15.095790334s to LoadCachedImages
	I0804 00:15:23.009946   65087 kubeadm.go:934] updating node { 192.168.61.137 8443 v1.31.0-rc.0 crio true true} ...
	I0804 00:15:23.010145   65087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:23.010230   65087 ssh_runner.go:195] Run: crio config
	I0804 00:15:23.057968   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:23.057991   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:23.058002   65087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:23.058022   65087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118016 NodeName:no-preload-118016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:23.058149   65087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-118016"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:23.058210   65087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0804 00:15:23.068635   65087 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:23.068713   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:23.077867   65087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0804 00:15:23.094220   65087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0804 00:15:23.110798   65087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0804 00:15:23.132230   65087 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:23.136622   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:23.149229   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:23.284623   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:23.309115   65087 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016 for IP: 192.168.61.137
	I0804 00:15:23.309212   65087 certs.go:194] generating shared ca certs ...
	I0804 00:15:23.309242   65087 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:23.309451   65087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:23.309509   65087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:23.309525   65087 certs.go:256] generating profile certs ...
	I0804 00:15:23.309633   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.key
	I0804 00:15:23.309718   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key.794a08a1
	I0804 00:15:23.309775   65087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key
	I0804 00:15:23.309951   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:23.309992   65087 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:23.310006   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:23.310050   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:23.310084   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:23.310125   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:23.310186   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:23.310811   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:23.346479   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:23.390508   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:23.419626   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:23.453891   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:15:23.481597   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:23.507749   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:23.537567   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:23.565469   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:23.590844   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:23.618748   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:23.645921   65087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:23.664034   65087 ssh_runner.go:195] Run: openssl version
	I0804 00:15:23.670083   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:23.681080   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685717   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685777   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.691573   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:23.702260   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:23.713185   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717747   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717803   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.723598   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:23.734445   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:23.745394   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750239   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750312   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.756471   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:23.767795   65087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:23.772483   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:23.778613   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:23.784560   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:23.790455   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:23.796260   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:23.802405   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:15:23.808623   65087 kubeadm.go:392] StartCluster: {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:23.808710   65087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:23.808753   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.857908   65087 cri.go:89] found id: ""
	I0804 00:15:23.857983   65087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:23.868694   65087 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:23.868717   65087 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:23.868789   65087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:23.878826   65087 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:23.879879   65087 kubeconfig.go:125] found "no-preload-118016" server: "https://192.168.61.137:8443"
	I0804 00:15:23.882653   65087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:23.893441   65087 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.137
	I0804 00:15:23.893475   65087 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:23.893489   65087 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:23.893533   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.933954   65087 cri.go:89] found id: ""
	I0804 00:15:23.934026   65087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:23.951080   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:23.962250   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:23.962274   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:23.962327   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:23.971760   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:23.971817   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:23.981767   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:23.991443   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:23.991494   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:24.001911   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.011927   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:24.011988   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.022349   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:24.032305   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:24.032371   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:24.042416   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:24.052403   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:24.163413   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.106900   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.323496   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.410928   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.569137   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:25.569221   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.069288   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.570343   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.615965   65087 api_server.go:72] duration metric: took 1.046825245s to wait for apiserver process to appear ...
	I0804 00:15:26.615997   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:26.616022   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:26.616618   65087 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0804 00:15:24.788329   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788775   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Found IP for machine: 192.168.39.132
	I0804 00:15:24.788799   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has current primary IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788811   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserving static IP address...
	I0804 00:15:24.789238   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.789266   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | skip adding static IP to network mk-default-k8s-diff-port-969068 - found existing host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"}
	I0804 00:15:24.789287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserved static IP address: 192.168.39.132
	I0804 00:15:24.789303   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for SSH to be available...
	I0804 00:15:24.789333   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Getting to WaitForSSH function...
	I0804 00:15:24.791371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791734   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.791762   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH client type: external
	I0804 00:15:24.791934   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa (-rw-------)
	I0804 00:15:24.791975   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:24.791994   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | About to run SSH command:
	I0804 00:15:24.792010   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | exit 0
	I0804 00:15:24.921420   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:24.921795   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetConfigRaw
	I0804 00:15:24.922375   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:24.925074   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.925431   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925680   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:15:24.925904   65441 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:24.925924   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:24.926120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:24.928597   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929006   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.929045   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929171   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:24.929334   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929498   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:24.929814   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:24.930001   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:24.930012   65441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:25.046325   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:25.046355   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046703   65441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-969068"
	I0804 00:15:25.046733   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046940   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.049807   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050383   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.050427   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050547   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.050739   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.050937   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.051131   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.051296   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.051504   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.051525   65441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-969068 && echo "default-k8s-diff-port-969068" | sudo tee /etc/hostname
	I0804 00:15:25.182512   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-969068
	
	I0804 00:15:25.182552   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.185673   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186019   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.186051   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186241   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.186425   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186551   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186660   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.186853   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.187034   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.187051   65441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-969068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-969068/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-969068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:25.313435   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:25.313470   65441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:25.313518   65441 buildroot.go:174] setting up certificates
	I0804 00:15:25.313531   65441 provision.go:84] configureAuth start
	I0804 00:15:25.313544   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.313856   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:25.316883   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317233   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.317287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317475   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.319773   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320180   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.320214   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320404   65441 provision.go:143] copyHostCerts
	I0804 00:15:25.320459   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:25.320467   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:25.320531   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:25.320666   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:25.320675   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:25.320702   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:25.320769   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:25.320777   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:25.320804   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:25.320871   65441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-969068 san=[127.0.0.1 192.168.39.132 default-k8s-diff-port-969068 localhost minikube]
	I0804 00:15:25.374535   65441 provision.go:177] copyRemoteCerts
	I0804 00:15:25.374590   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:25.374613   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.377629   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378047   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.378073   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.378478   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.378672   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.378897   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.469632   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:25.495826   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0804 00:15:25.527006   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:25.557603   65441 provision.go:87] duration metric: took 244.055462ms to configureAuth
	I0804 00:15:25.557637   65441 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:25.557873   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:25.557982   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.560974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561339   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.561389   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.561740   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.561881   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.562043   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.562248   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.562456   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.562471   65441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:25.835452   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:25.835480   65441 machine.go:97] duration metric: took 909.563441ms to provisionDockerMachine
	I0804 00:15:25.835496   65441 start.go:293] postStartSetup for "default-k8s-diff-port-969068" (driver="kvm2")
	I0804 00:15:25.835512   65441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:25.835541   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:25.835846   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:25.835873   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.838713   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839124   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.839151   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.839465   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.839634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.839779   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.928376   65441 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:25.932472   65441 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:25.932498   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:25.932608   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:25.932775   65441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:25.932951   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:25.943100   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:25.969517   65441 start.go:296] duration metric: took 134.003956ms for postStartSetup
	I0804 00:15:25.969567   65441 fix.go:56] duration metric: took 20.799045329s for fixHost
	I0804 00:15:25.969591   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.972743   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973172   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.973204   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973342   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.973596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973768   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973944   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.974158   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.974330   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.974343   65441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:26.095438   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730526.053053982
	
	I0804 00:15:26.095462   65441 fix.go:216] guest clock: 1722730526.053053982
	I0804 00:15:26.095472   65441 fix.go:229] Guest: 2024-08-04 00:15:26.053053982 +0000 UTC Remote: 2024-08-04 00:15:25.969572309 +0000 UTC m=+213.641216658 (delta=83.481673ms)
	I0804 00:15:26.095524   65441 fix.go:200] guest clock delta is within tolerance: 83.481673ms
	I0804 00:15:26.095534   65441 start.go:83] releasing machines lock for "default-k8s-diff-port-969068", held for 20.925048627s
	I0804 00:15:26.095570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.095862   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:26.098718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099112   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.099145   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.099929   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100182   65441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:26.100222   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.100347   65441 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:26.100388   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.103393   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103720   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103942   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.103963   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104142   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104159   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.104243   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104347   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104384   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104499   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104545   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104728   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.104881   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.214704   65441 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:26.221287   65441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:26.378021   65441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:26.385673   65441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:26.385751   65441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:26.403073   65441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:26.403104   65441 start.go:495] detecting cgroup driver to use...
	I0804 00:15:26.403193   65441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:26.421108   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:26.435556   65441 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:26.435627   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:26.455219   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:26.477841   65441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:26.626980   65441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:26.806808   65441 docker.go:233] disabling docker service ...
	I0804 00:15:26.806887   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:26.824079   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:26.839225   65441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:26.967375   65441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:27.136156   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:27.151822   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:27.173326   65441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:27.173404   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.184431   65441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:27.184509   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.194890   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.208349   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.222326   65441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:27.237212   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.249571   65441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.274913   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.288929   65441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:27.305789   65441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:27.305863   65441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:27.321708   65441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:27.332129   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:27.482279   65441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:27.638388   65441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:27.638465   65441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:27.644607   65441 start.go:563] Will wait 60s for crictl version
	I0804 00:15:27.644665   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:15:27.648663   65441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:27.691731   65441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:27.691824   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.731365   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.767416   65441 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
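
The run above prepares CRI-O by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarting the service. A minimal Go sketch of the two central edits, for anyone reproducing the step by hand, might look like the following; this is not minikube's own code, and it assumes root access on the guest and the drop-in path shown in the log:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// A minimal sketch (not minikube source): repeat the two sed edits logged above,
// pointing pause_image at registry.k8s.io/pause:3.9 and cgroup_manager at cgroupfs
// in the CRI-O drop-in. Assumes root privileges and that the drop-in file exists.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("updated", path)
}

As in the log, a subsequent "sudo systemctl restart crio" is still needed for the new settings to take effect.
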
	I0804 00:15:26.121074   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Start
	I0804 00:15:26.121263   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring networks are active...
	I0804 00:15:26.122075   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network default is active
	I0804 00:15:26.122471   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network mk-embed-certs-877598 is active
	I0804 00:15:26.122884   64502 main.go:141] libmachine: (embed-certs-877598) Getting domain xml...
	I0804 00:15:26.123684   64502 main.go:141] libmachine: (embed-certs-877598) Creating domain...
	I0804 00:15:27.536026   64502 main.go:141] libmachine: (embed-certs-877598) Waiting to get IP...
	I0804 00:15:27.537165   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.537650   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.537734   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.537654   66522 retry.go:31] will retry after 277.473157ms: waiting for machine to come up
	I0804 00:15:27.817330   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.817824   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.817858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.817788   66522 retry.go:31] will retry after 322.160841ms: waiting for machine to come up
	I0804 00:15:28.141287   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.141818   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.141855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.141775   66522 retry.go:31] will retry after 325.833359ms: waiting for machine to come up
	I0804 00:15:28.469440   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.469976   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.470015   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.469933   66522 retry.go:31] will retry after 372.304971ms: waiting for machine to come up
	I0804 00:15:28.843604   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.844376   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.844400   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.844297   66522 retry.go:31] will retry after 607.361674ms: waiting for machine to come up
	I0804 00:15:29.453082   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:29.453557   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:29.453586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:29.453527   66522 retry.go:31] will retry after 615.002468ms: waiting for machine to come up
	I0804 00:15:30.070598   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.071112   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.071134   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.071079   66522 retry.go:31] will retry after 834.292107ms: waiting for machine to come up
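
The embed-certs-877598 machine is still waiting for a DHCP lease here; the "will retry after ..." lines show the poll-with-growing-delay pattern used while waiting. A rough sketch of that pattern (a hypothetical helper, not minikube's retry.go) could be:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// A minimal sketch of poll-with-growing-delay: run a check, sleep a jittered and
// growing interval on failure, and give up after a fixed number of attempts.
func waitFor(check func() error, attempts int) error {
	delay := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed (%v); retrying after %v\n", i+1, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay, as the logged intervals roughly do
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		if time.Since(start) > 2*time.Second {
			return nil // stand-in for "the domain reported an IP address"
		}
		return errors.New("machine has no IP yet")
	}, 10)
	fmt.Println("result:", err)
}
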
	I0804 00:15:27.116719   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.030589   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.030625   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.030641   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.091459   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.091494   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.116633   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.149335   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.149394   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:30.617009   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.622086   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.622117   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.116320   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.125065   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:31.125143   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.617091   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.627142   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:15:31.636371   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:15:31.636405   65087 api_server.go:131] duration metric: took 5.020400356s to wait for apiserver health ...
	I0804 00:15:31.636414   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:31.636420   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:31.638145   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
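
The probe above cycles through 403 (anonymous user), 500 with failed poststarthooks, and finally 200 once apiserver bootstrap completes. A minimal Go sketch of such a health probe (not minikube's api_server.go; the address is the one from the log, and TLS verification is skipped because, like the anonymous requests logged above, it presents no client certificate) could look like:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// A minimal sketch: poll the apiserver's /healthz until it returns HTTP 200.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.61.137:8443/healthz?verbose"
	for attempt := 1; attempt <= 20; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d\n%s\n", attempt, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
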
	I0804 00:15:26.996399   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.496810   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.995825   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.496395   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.996561   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.496735   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.996542   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.496406   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.996259   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.496307   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.639553   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:31.658269   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:31.685188   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:31.703581   65087 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:31.703627   65087 system_pods.go:61] "coredns-6f6b679f8f-9vdxc" [fd645695-cc1d-4394-96b0-832f48e9cf26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:31.703638   65087 system_pods.go:61] "etcd-no-preload-118016" [a329ecd7-7574-48f4-a776-7b7c05465f8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:31.703649   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [43d313aa-1844-488d-8925-b744f504323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:31.703661   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [d56a5461-29d3-47f7-95df-a7fc6b52ca2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:31.703669   65087 system_pods.go:61] "kube-proxy-8bcg7" [c2b43118-5216-41bf-9f16-00f11ca1eab5] Running
	I0804 00:15:31.703678   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [53dc528c-2f00-4ca6-86c6-d02f4533229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:31.703687   65087 system_pods.go:61] "metrics-server-6867b74b74-5xfgz" [c558b60d-3816-406a-addb-96cd42266bd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:31.703698   65087 system_pods.go:61] "storage-provisioner" [1edb442e-272f-4ef7-b3fb-7c46b915c61a] Running
	I0804 00:15:31.703707   65087 system_pods.go:74] duration metric: took 18.49198ms to wait for pod list to return data ...
	I0804 00:15:31.703721   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:31.712702   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:31.712735   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:31.712748   65087 node_conditions.go:105] duration metric: took 9.019815ms to run NodePressure ...
	I0804 00:15:31.712773   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:27.768972   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:27.772437   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.772860   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:27.772903   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.773135   65441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:27.777834   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:27.792279   65441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:27.792437   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:27.792493   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:27.833330   65441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:27.833453   65441 ssh_runner.go:195] Run: which lz4
	I0804 00:15:27.837836   65441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:15:27.842093   65441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:27.842128   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:29.410529   65441 crio.go:462] duration metric: took 1.572735301s to copy over tarball
	I0804 00:15:29.410610   65441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:32.062492   65441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.651848511s)
	I0804 00:15:32.062533   65441 crio.go:469] duration metric: took 2.651972207s to extract the tarball
	I0804 00:15:32.062545   65441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:32.100003   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:32.144166   65441 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:32.144192   65441 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:32.144201   65441 kubeadm.go:934] updating node { 192.168.39.132 8444 v1.30.3 crio true true} ...
	I0804 00:15:32.144327   65441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-969068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:32.144434   65441 ssh_runner.go:195] Run: crio config
	I0804 00:15:32.197593   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:32.197618   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:32.197630   65441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:32.197658   65441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-969068 NodeName:default-k8s-diff-port-969068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:32.197862   65441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-969068"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:32.197937   65441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:32.208469   65441 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:32.208551   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:32.218194   65441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0804 00:15:32.237731   65441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:32.259599   65441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0804 00:15:32.281113   65441 ssh_runner.go:195] Run: grep 192.168.39.132	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:32.285559   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:32.298722   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:30.906612   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.907056   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.907086   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.907012   66522 retry.go:31] will retry after 1.489076061s: waiting for machine to come up
	I0804 00:15:32.397239   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:32.397614   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:32.397642   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:32.397568   66522 retry.go:31] will retry after 1.737097329s: waiting for machine to come up
	I0804 00:15:34.135859   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:34.136363   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:34.136393   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:34.136321   66522 retry.go:31] will retry after 2.154712298s: waiting for machine to come up
	I0804 00:15:31.996780   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.496164   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.996444   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.496838   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.996533   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.496300   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.996772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.495937   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.996834   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.496277   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.982926   65087 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989888   65087 kubeadm.go:739] kubelet initialised
	I0804 00:15:31.989926   65087 kubeadm.go:740] duration metric: took 6.968445ms waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989938   65087 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:31.997210   65087 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:34.748142   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:32.432400   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:32.450525   65441 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068 for IP: 192.168.39.132
	I0804 00:15:32.450548   65441 certs.go:194] generating shared ca certs ...
	I0804 00:15:32.450571   65441 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:32.450738   65441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:32.450801   65441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:32.450815   65441 certs.go:256] generating profile certs ...
	I0804 00:15:32.450922   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.key
	I0804 00:15:32.451000   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key.a17bd5dd
	I0804 00:15:32.451053   65441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key
	I0804 00:15:32.451199   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:32.451242   65441 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:32.451255   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:32.451279   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:32.451303   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:32.451326   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:32.451365   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:32.451910   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:32.505178   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:32.557546   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:32.596512   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:32.635476   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 00:15:32.687156   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:32.716537   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:32.746312   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:15:32.777788   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:32.806730   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:32.835822   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:32.864241   65441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:32.886754   65441 ssh_runner.go:195] Run: openssl version
	I0804 00:15:32.893177   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:32.904847   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909871   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909937   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.916357   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:32.927322   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:32.939447   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944221   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944275   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.950218   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:32.966506   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:32.981288   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986761   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986831   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.993077   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:33.007428   65441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:33.013290   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:33.019997   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:33.026423   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:33.033004   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:33.039205   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:33.045367   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
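
Each "openssl x509 -noout ... -checkend 86400" run above asserts that a certificate stays valid for at least another 24 hours. An equivalent standalone Go sketch (hypothetical, taking the certificate path as its only argument) is:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// A minimal sketch of the -checkend 86400 check: parse a PEM certificate and
// fail (non-zero exit) if it expires within the next 24 hours.
func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1) // same non-zero convention as openssl -checkend
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}

For example, it could be pointed at /var/lib/minikube/certs/apiserver-kubelet-client.crt, one of the certificates checked in the log above.
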
	I0804 00:15:33.051462   65441 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:33.051546   65441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:33.051605   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.094354   65441 cri.go:89] found id: ""
	I0804 00:15:33.094433   65441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:33.105416   65441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:33.105439   65441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:33.105480   65441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:33.115838   65441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:33.117466   65441 kubeconfig.go:125] found "default-k8s-diff-port-969068" server: "https://192.168.39.132:8444"
	I0804 00:15:33.120806   65441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:33.130533   65441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.132
	I0804 00:15:33.130567   65441 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:33.130579   65441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:33.130628   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.178718   65441 cri.go:89] found id: ""
	I0804 00:15:33.178813   65441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:33.199000   65441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:33.212169   65441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:33.212188   65441 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:33.212255   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0804 00:15:33.225192   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:33.225254   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:33.239194   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0804 00:15:33.252402   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:33.252470   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:33.265198   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.276564   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:33.276636   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.288785   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0804 00:15:33.299848   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:33.299904   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
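The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (https://control-plane.minikube.internal:8444); otherwise it is deleted so kubeadm can regenerate it. A local-filesystem sketch of that pattern, purely illustrative (minikube performs it with grep and rm over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that is missing or does not point
// at the expected control-plane endpoint, so kubeadm will regenerate it.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(p) // ignore errors, as the logged rm -f does
			fmt.Println("removed stale config:", p)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
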
	I0804 00:15:33.311115   65441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:33.322121   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:33.442578   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.526815   65441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084197731s)
	I0804 00:15:34.526857   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.803105   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.893681   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
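Rather than running a full kubeadm init, the restart path above re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence; the PATH handling and SSH transport that minikube adds are omitted.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same phase sequence as the log, in order.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{}, p...), "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases re-run")
}
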
	I0804 00:15:34.978573   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:34.978668   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.479179   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.979520   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.063056   65441 api_server.go:72] duration metric: took 1.084463955s to wait for apiserver process to appear ...
	I0804 00:15:36.063161   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:36.063203   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.063755   65441 api_server.go:269] stopped: https://192.168.39.132:8444/healthz: Get "https://192.168.39.132:8444/healthz": dial tcp 192.168.39.132:8444: connect: connection refused
	I0804 00:15:36.563501   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.293051   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:36.293675   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:36.293710   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:36.293604   66522 retry.go:31] will retry after 2.826050203s: waiting for machine to come up
	I0804 00:15:39.120961   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:39.121602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:39.121628   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:39.121554   66522 retry.go:31] will retry after 2.710829438s: waiting for machine to come up
	I0804 00:15:36.996761   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.495885   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.995785   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.496550   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.996645   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.995851   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.496685   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.995896   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.495864   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.005216   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.505397   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.405829   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.405895   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.405913   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.433026   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.433063   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.563242   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.568554   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:39.568591   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.064078   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.085940   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.085978   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.564041   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.569785   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.569812   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.063334   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.068113   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.068135   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.563691   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.569214   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.569248   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.063737   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.068227   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.068260   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.563309   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.567740   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.567775   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:43.063306   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:43.067611   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:15:43.073842   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:15:43.073868   65441 api_server.go:131] duration metric: took 7.010684682s to wait for apiserver health ...
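The polling loop above retries GET /healthz roughly every 500ms, treating connection refused, 403 (RBAC roles not yet bootstrapped) and 500 (post-start hooks still failing) as "not ready yet" until the endpoint returns 200. A minimal sketch of that wait, assuming for illustration a client that skips TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Connection errors and non-200 codes (403 before RBAC bootstrap, 500 while
// post-start hooks run) are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.132:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
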
	I0804 00:15:43.073879   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:43.073887   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:43.075779   65441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:43.077123   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:43.088611   65441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
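The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. The sketch below writes a generic bridge plus portmap CNI chain of the kind a bridge configuration typically uses; the subnet and exact fields are assumptions for illustration, not minikube's actual conflist.

package main

import "os"

// An example bridge + portmap CNI chain; fields and subnet are assumed.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Write the example conflist where the CNI plugins will look for it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
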
	I0804 00:15:43.109845   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:43.119204   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:43.119235   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:43.119246   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:43.119259   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:43.119269   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:43.119275   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:15:43.119282   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:43.119300   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:43.119309   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:15:43.119317   65441 system_pods.go:74] duration metric: took 9.453775ms to wait for pod list to return data ...
	I0804 00:15:43.119328   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:43.122493   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:43.122516   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:43.122528   65441 node_conditions.go:105] duration metric: took 3.191087ms to run NodePressure ...
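The NodePressure step reads node capacity (2 CPUs and 17734596Ki of ephemeral storage here) straight from the node status. A client-go sketch that reports the same fields, assuming the kubeconfig path that this run updates later in the log; this is an illustration, not minikube's node_conditions.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, taken from the "Updating kubeconfig" line in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-9607/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
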
	I0804 00:15:43.122547   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:43.391258   65441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395252   65441 kubeadm.go:739] kubelet initialised
	I0804 00:15:43.395274   65441 kubeadm.go:740] duration metric: took 3.992079ms waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395282   65441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:43.400173   65441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.404618   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404645   65441 pod_ready.go:81] duration metric: took 4.449232ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.404665   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404675   65441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.409134   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409165   65441 pod_ready.go:81] duration metric: took 4.471898ms for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.409178   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409190   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.414342   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414362   65441 pod_ready.go:81] duration metric: took 5.160435ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.414374   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414383   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.513956   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.513987   65441 pod_ready.go:81] duration metric: took 99.59507ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.514003   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.514033   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.913592   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913619   65441 pod_ready.go:81] duration metric: took 399.572927ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.913628   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913634   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.313833   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313864   65441 pod_ready.go:81] duration metric: took 400.220214ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.313878   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313886   65441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.713583   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713616   65441 pod_ready.go:81] duration metric: took 399.716432ms for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.713636   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713647   65441 pod_ready.go:38] duration metric: took 1.318356042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
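Each pod_ready wait above boils down to checking the pod's PodReady condition, and it is skipped while the hosting node itself is not Ready, as the messages show. A client-go sketch of that readiness check, using the coredns pod name from the log; illustrative only, not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumed kubeconfig path, matching the profile used in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-9607/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(cs, "kube-system", "coredns-7db6d8ff4d-b8v28")
	fmt.Println("ready:", ready, "err:", err)
}
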
	I0804 00:15:44.713666   65441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:15:44.725908   65441 ops.go:34] apiserver oom_adj: -16
	I0804 00:15:44.725935   65441 kubeadm.go:597] duration metric: took 11.620489409s to restartPrimaryControlPlane
	I0804 00:15:44.725947   65441 kubeadm.go:394] duration metric: took 11.674491721s to StartCluster
	I0804 00:15:44.725966   65441 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.726046   65441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:15:44.728392   65441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.728702   65441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:15:44.728805   65441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:15:44.728895   65441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728942   65441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.728954   65441 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:15:44.728958   65441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728990   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.728967   65441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.729027   65441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-969068"
	I0804 00:15:44.729039   65441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.729054   65441 addons.go:243] addon metrics-server should already be in state true
	I0804 00:15:44.729143   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.729436   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729470   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729515   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729564   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729598   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729642   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.728909   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:44.730486   65441 out.go:177] * Verifying Kubernetes components...
	I0804 00:15:44.731972   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:44.748737   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0804 00:15:44.749200   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0804 00:15:44.749311   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0804 00:15:44.749582   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749691   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749858   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.750128   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750144   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750153   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750171   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750326   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750347   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750609   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750617   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750810   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.751212   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.751249   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751286   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.751733   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751780   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.754574   65441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.754616   65441 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:15:44.754649   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.755038   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.755080   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.769763   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0804 00:15:44.770311   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.770828   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.770850   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.771209   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.771371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.771935   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0804 00:15:44.773284   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.773416   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0804 00:15:44.773646   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.773854   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.773866   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.773981   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.774227   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.774529   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.774551   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.774665   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.774711   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.774938   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.775078   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.776166   65441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:15:44.776690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.777692   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:15:44.777708   65441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:15:44.777724   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.778473   65441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:41.833728   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:41.834246   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:41.834270   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:41.834210   66522 retry.go:31] will retry after 2.891635961s: waiting for machine to come up
	I0804 00:15:44.727424   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727895   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has current primary IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727919   64502 main.go:141] libmachine: (embed-certs-877598) Found IP for machine: 192.168.50.140
	I0804 00:15:44.727943   64502 main.go:141] libmachine: (embed-certs-877598) Reserving static IP address...
	I0804 00:15:44.728570   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.728602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | skip adding static IP to network mk-embed-certs-877598 - found existing host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"}
	I0804 00:15:44.728617   64502 main.go:141] libmachine: (embed-certs-877598) Reserved static IP address: 192.168.50.140
	I0804 00:15:44.728634   64502 main.go:141] libmachine: (embed-certs-877598) Waiting for SSH to be available...
	I0804 00:15:44.728648   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Getting to WaitForSSH function...
	I0804 00:15:44.731684   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732102   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.732137   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732388   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH client type: external
	I0804 00:15:44.732408   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa (-rw-------)
	I0804 00:15:44.732438   64502 main.go:141] libmachine: (embed-certs-877598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:44.732448   64502 main.go:141] libmachine: (embed-certs-877598) DBG | About to run SSH command:
	I0804 00:15:44.732462   64502 main.go:141] libmachine: (embed-certs-877598) DBG | exit 0
	I0804 00:15:44.873689   64502 main.go:141] libmachine: (embed-certs-877598) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:44.874033   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetConfigRaw
	I0804 00:15:44.874716   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:44.877406   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.877823   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.877855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.878130   64502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/config.json ...
	I0804 00:15:44.878358   64502 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:44.878382   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:44.878563   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:44.880862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881215   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.881253   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881427   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:44.881597   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881785   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881958   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:44.882150   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:44.882381   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:44.882399   64502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:44.998143   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:44.998172   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998534   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:15:44.998564   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.001998   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002508   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.002545   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002691   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.002847   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003026   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003175   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.003388   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.003592   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.003606   64502 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-877598 && echo "embed-certs-877598" | sudo tee /etc/hostname
	I0804 00:15:45.142065   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-877598
	
	I0804 00:15:45.142123   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.145427   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.145858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.145912   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.146133   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.146279   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146422   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146595   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.146778   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.146991   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.147007   64502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-877598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-877598/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-877598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:45.275711   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:45.275748   64502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:45.275775   64502 buildroot.go:174] setting up certificates
	I0804 00:15:45.275790   64502 provision.go:84] configureAuth start
	I0804 00:15:45.275804   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:45.276145   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:45.279645   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280141   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.280166   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280298   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.283135   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283495   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.283521   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283693   64502 provision.go:143] copyHostCerts
	I0804 00:15:45.283754   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:45.283767   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:45.283837   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:45.283954   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:45.283975   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:45.284004   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:45.284168   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:45.284182   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:45.284214   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:45.284280   64502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.embed-certs-877598 san=[127.0.0.1 192.168.50.140 embed-certs-877598 localhost minikube]
	I0804 00:15:45.484805   64502 provision.go:177] copyRemoteCerts
	I0804 00:15:45.484861   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:45.484883   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.488177   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.488621   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488852   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.489032   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.489191   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.489340   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:45.580782   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:45.612118   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:45.638201   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:15:45.665741   64502 provision.go:87] duration metric: took 389.935703ms to configureAuth
	I0804 00:15:45.665778   64502 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:45.666008   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:45.666110   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.668942   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669312   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.669343   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.669812   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.669995   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.670158   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.670317   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.670501   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.670522   64502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:44.779708   65441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:44.779730   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:15:44.779747   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.780637   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781098   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.781120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.781424   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.781593   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.781753   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.783024   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783459   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.783479   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783895   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.784054   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.784219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.784343   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.793057   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0804 00:15:44.793581   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.794075   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.794094   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.794413   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.794586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.796274   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.796609   65441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:44.796623   65441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:15:44.796643   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.799445   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.799990   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.800254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.800698   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.800864   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.800974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.801305   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.962413   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:44.983596   65441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:45.057238   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:15:45.057261   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:15:45.082722   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:15:45.082745   65441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:15:45.088213   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:45.115230   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.115261   65441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:15:45.115325   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:45.164676   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.502008   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502040   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502381   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.502440   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502463   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.502476   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502484   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502701   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502718   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.510043   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.510065   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.510305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.510353   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.510364   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217233   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101870491s)
	I0804 00:15:46.217295   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217308   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.217585   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.217609   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217625   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217652   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.217719   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.218073   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.218091   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.218104   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.255756   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.091044347s)
	I0804 00:15:46.255802   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.255819   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256053   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256093   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256101   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256110   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.256117   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256412   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256446   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256454   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256465   65441 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-969068"
	I0804 00:15:46.258662   65441 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:15:41.995808   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.496612   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.996566   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.495812   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.996095   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.495902   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.996724   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.495854   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.996354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.496185   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.005235   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:44.003809   65087 pod_ready.go:92] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.003847   65087 pod_ready.go:81] duration metric: took 12.006609818s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.003861   65087 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009518   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.009541   65087 pod_ready.go:81] duration metric: took 5.671724ms for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009554   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014897   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.014923   65087 pod_ready.go:81] duration metric: took 5.360171ms for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014938   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521943   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.521968   65087 pod_ready.go:81] duration metric: took 1.507021563s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521983   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527550   65087 pod_ready.go:92] pod "kube-proxy-8bcg7" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.527575   65087 pod_ready.go:81] duration metric: took 5.585026ms for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527588   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604221   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.604245   65087 pod_ready.go:81] duration metric: took 76.648502ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604260   65087 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:46.260578   65441 addons.go:510] duration metric: took 1.531768603s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:15:46.988351   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:45.985471   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:45.985501   64502 machine.go:97] duration metric: took 1.107126695s to provisionDockerMachine
	I0804 00:15:45.985514   64502 start.go:293] postStartSetup for "embed-certs-877598" (driver="kvm2")
	I0804 00:15:45.985527   64502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:45.985554   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:45.985928   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:45.985962   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.989294   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989699   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.989731   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989875   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.990079   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.990230   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.990355   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.085684   64502 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:46.091660   64502 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:46.091690   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:46.091776   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:46.091873   64502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:46.092005   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:46.102373   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:46.129547   64502 start.go:296] duration metric: took 144.018823ms for postStartSetup
	I0804 00:15:46.129594   64502 fix.go:56] duration metric: took 20.033890858s for fixHost
	I0804 00:15:46.129619   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.132803   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133154   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.133190   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133347   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.133580   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.133766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.134016   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.134242   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:46.134454   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:46.134471   64502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:46.250499   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730546.219077490
	
	I0804 00:15:46.250528   64502 fix.go:216] guest clock: 1722730546.219077490
	I0804 00:15:46.250539   64502 fix.go:229] Guest: 2024-08-04 00:15:46.21907749 +0000 UTC Remote: 2024-08-04 00:15:46.129599456 +0000 UTC m=+355.401502879 (delta=89.478034ms)
	I0804 00:15:46.250567   64502 fix.go:200] guest clock delta is within tolerance: 89.478034ms
	I0804 00:15:46.250575   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 20.15490553s
	I0804 00:15:46.250609   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.250902   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:46.253782   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254164   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.254194   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254376   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.254967   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255169   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255247   64502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:46.255307   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.255376   64502 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:46.255399   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.260113   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260481   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.260511   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260529   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260702   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.260870   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.260995   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.261022   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.261045   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261182   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.261208   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.261305   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.261451   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261588   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.372061   64502 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:46.378356   64502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:46.527705   64502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:46.534567   64502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:46.534620   64502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:46.550801   64502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:46.550829   64502 start.go:495] detecting cgroup driver to use...
	I0804 00:15:46.550916   64502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:46.568369   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:46.583437   64502 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:46.583496   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:46.599267   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:46.614874   64502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:46.734467   64502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:46.900868   64502 docker.go:233] disabling docker service ...
	I0804 00:15:46.900941   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:46.915612   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:46.929948   64502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:47.056637   64502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:47.175277   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:47.190167   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:47.211062   64502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:47.211115   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.222459   64502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:47.222547   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.232964   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.243663   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.254387   64502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:47.266424   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.277323   64502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.296078   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.307058   64502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:47.317138   64502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:47.317223   64502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:47.332104   64502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:47.342965   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:47.464208   64502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:47.620127   64502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:47.620196   64502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:47.625103   64502 start.go:563] Will wait 60s for crictl version
	I0804 00:15:47.625165   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:15:47.628942   64502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:47.668593   64502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:47.668686   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.700313   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.737281   64502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:47.738730   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:47.741698   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742098   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:47.742144   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742310   64502 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:47.746883   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:47.760111   64502 kubeadm.go:883] updating cluster {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:47.760247   64502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:47.760305   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:47.801700   64502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:47.801766   64502 ssh_runner.go:195] Run: which lz4
	I0804 00:15:47.806337   64502 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:15:47.811010   64502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:47.811050   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:49.359157   64502 crio.go:462] duration metric: took 1.552864688s to copy over tarball
	I0804 00:15:49.359245   64502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:46.996215   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.496634   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.996278   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.496184   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.996616   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.496240   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.996433   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.996600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.496459   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.611474   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:49.611879   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.616732   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:48.988818   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:49.988196   65441 node_ready.go:49] node "default-k8s-diff-port-969068" has status "Ready":"True"
	I0804 00:15:49.988220   65441 node_ready.go:38] duration metric: took 5.004585481s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:49.988229   65441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:49.994536   65441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001200   65441 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:50.001229   65441 pod_ready.go:81] duration metric: took 6.665744ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001243   65441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:52.009436   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.759772   64502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400487256s)
	I0804 00:15:51.759836   64502 crio.go:469] duration metric: took 2.40064418s to extract the tarball
	I0804 00:15:51.759848   64502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:51.800038   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:51.845098   64502 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:51.845124   64502 cache_images.go:84] Images are preloaded, skipping loading
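	The preload step above first asks `crictl images` whether the expected control-plane images are present, and only copies and unpacks the lz4 tarball when they are not. A minimal sketch of that check-then-extract flow, assuming `crictl`, `tar`, and `lz4` are on PATH; the helper names and the local tarball path are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// imagePreloaded reports whether crictl already knows about the given image.
func imagePreloaded(image string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	return bytes.Contains(out, []byte(image)), nil
}

// extractPreload unpacks an lz4-compressed image tarball into /var, mirroring:
// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf <tarball>
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	var stderr strings.Builder
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, stderr.String())
	}
	return nil
}

func main() {
	ok, err := imagePreloaded("registry.k8s.io/kube-apiserver:v1.30.3")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	if ok {
		fmt.Println("all images are preloaded, skipping extraction")
		return
	}
	// /preloaded.tar.lz4 is the path used in the log; adjust for a local test.
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```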
	I0804 00:15:51.845134   64502 kubeadm.go:934] updating node { 192.168.50.140 8443 v1.30.3 crio true true} ...
	I0804 00:15:51.845258   64502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-877598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:51.845339   64502 ssh_runner.go:195] Run: crio config
	I0804 00:15:51.895019   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:15:51.895039   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:51.895048   64502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:51.895067   64502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-877598 NodeName:embed-certs-877598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:51.895202   64502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-877598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
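	minikube renders the kubeadm.yaml above from the options listed at kubeadm.go:181. As a rough illustration of that templating step (not minikube's actual templates or structs), here is a text/template sketch that fills in a ClusterConfiguration fragment from the values seen in this run:

```go
package main

import (
	"os"
	"text/template"
)

// clusterOpts is a trimmed-down, illustrative view of the kubeadm options
// printed in the log; the field names are not minikube's.
type clusterOpts struct {
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
	opts := clusterOpts{
		AdvertiseAddress:  "192.168.50.140",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.3",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```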
	I0804 00:15:51.895272   64502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:51.906363   64502 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:51.906426   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:51.917727   64502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0804 00:15:51.936370   64502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:51.953894   64502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0804 00:15:51.972910   64502 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:51.977288   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
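	The one-liner above rewrites /etc/hosts idempotently: drop any stale control-plane.minikube.internal entry, then append the current mapping. The same idea in Go, assuming a scratch hosts file for experimentation; the helper name and file path are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner in the log: remove any existing
// tab-separated entry for host, then append a fresh ip<TAB>host mapping.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue // stale entry, drop it like grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Use a scratch file rather than the real /etc/hosts when experimenting.
	if err := ensureHostsEntry("hosts.test", "192.168.50.140", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```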
	I0804 00:15:51.990992   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:52.115808   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:52.133326   64502 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598 for IP: 192.168.50.140
	I0804 00:15:52.133373   64502 certs.go:194] generating shared ca certs ...
	I0804 00:15:52.133396   64502 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:52.133564   64502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:52.133613   64502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:52.133628   64502 certs.go:256] generating profile certs ...
	I0804 00:15:52.133736   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/client.key
	I0804 00:15:52.133824   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key.5511d337
	I0804 00:15:52.133873   64502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key
	I0804 00:15:52.134013   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:52.134077   64502 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:52.134091   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:52.134130   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:52.134168   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:52.134200   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:52.134256   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:52.134880   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:52.175985   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:52.209458   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:52.239097   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:52.271037   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0804 00:15:52.317594   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:52.353485   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:52.382159   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:52.407478   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:52.433103   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:52.457233   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:52.481534   64502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:52.500482   64502 ssh_runner.go:195] Run: openssl version
	I0804 00:15:52.509021   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:52.522431   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527125   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527184   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.533310   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:52.546085   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:52.557781   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562516   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562587   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.568403   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:52.580431   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:52.592706   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597280   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597382   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.603284   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
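	The openssl/ln pairs above install each CA under its OpenSSL subject hash so TLS libraries can look it up as `<hash>.0` in /etc/ssl/certs. A small sketch of that pattern, shelling out to `openssl x509 -hash -noout` as the log does; the paths and helper name are placeholders:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a CA certificate and
// symlinks it as <hash>.0 in certsDir, the scheme visible in the log above.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	// The log links /usr/share/ca-certificates/*.pem into /etc/ssl/certs;
	// here a local file and the current directory stand in for those paths.
	if err := linkCertByHash("minikubeCA.pem", "."); err != nil {
		fmt.Println(err)
	}
}
```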
	I0804 00:15:52.616100   64502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:52.621422   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:52.631811   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:52.639130   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:52.646159   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:52.652721   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:52.659459   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
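	The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. The sketch below answers the same question with Go's crypto/x509 instead of shelling out to openssl; the certificate path is a placeholder:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers in the log.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// apiserver.crt is a placeholder; the log checks several control-plane certs.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```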
	I0804 00:15:52.665864   64502 kubeadm.go:392] StartCluster: {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:52.665991   64502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:52.666044   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.711272   64502 cri.go:89] found id: ""
	I0804 00:15:52.711346   64502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:52.722294   64502 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:52.722321   64502 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:52.722380   64502 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:52.733148   64502 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:52.734706   64502 kubeconfig.go:125] found "embed-certs-877598" server: "https://192.168.50.140:8443"
	I0804 00:15:52.737995   64502 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:52.749941   64502 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.140
	I0804 00:15:52.749986   64502 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:52.749998   64502 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:52.750043   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.793295   64502 cri.go:89] found id: ""
	I0804 00:15:52.793388   64502 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:52.811438   64502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:52.824055   64502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:52.824080   64502 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:52.824128   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:52.835393   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:52.835446   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:52.846732   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:52.856889   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:52.856942   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:52.869951   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.881836   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:52.881909   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.894121   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:52.905643   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:52.905711   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:52.917063   64502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:52.929399   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:53.132145   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.096969   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.325640   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.385886   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.472926   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:54.473002   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.973406   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.473410   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.578082   64502 api_server.go:72] duration metric: took 1.105154357s to wait for apiserver process to appear ...
	I0804 00:15:55.578170   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:55.578207   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:55.578847   64502 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
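	From here the restart loops on the apiserver's /healthz endpoint. A minimal polling sketch under the same assumptions the surrounding log reflects: the serving certificate is not locally trusted (so verification is skipped), and connection refused, 403 for the anonymous user, and 500 while post-start hooks are still failing all mean "retry later". The URL, timeout, and helper name are illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Any error or non-200 status is treated as
// "not ready yet" and retried after a short sleep.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serving cert is not in the local trust store here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.140:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```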
	I0804 00:15:51.996447   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.496265   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.996030   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.996313   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.495823   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.996360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.496652   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.996049   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:55.996141   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:56.045001   64758 cri.go:89] found id: ""
	I0804 00:15:56.045031   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.045042   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:56.045049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:56.045114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:56.086505   64758 cri.go:89] found id: ""
	I0804 00:15:56.086535   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.086547   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:56.086554   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:56.086618   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:56.125953   64758 cri.go:89] found id: ""
	I0804 00:15:56.125983   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.125994   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:56.126001   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:56.126060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:56.167313   64758 cri.go:89] found id: ""
	I0804 00:15:56.167343   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.167354   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:56.167361   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:56.167424   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:56.211102   64758 cri.go:89] found id: ""
	I0804 00:15:56.211132   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.211142   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:56.211149   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:56.211231   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:56.246894   64758 cri.go:89] found id: ""
	I0804 00:15:56.246926   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.246937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:56.246945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:56.247012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:56.281952   64758 cri.go:89] found id: ""
	I0804 00:15:56.281980   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.281991   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:56.281998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:56.282060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:56.317685   64758 cri.go:89] found id: ""
	I0804 00:15:56.317719   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.317733   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
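	When the apiserver never comes up, the tooling enumerates CRI containers per component, as in the block above, and each empty result becomes a "No container was found matching" warning. A sketch of that enumeration via `crictl ps -a --quiet --name=<component>`; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs crictl reports for containers whose name
// matches the given component. An empty slice corresponds to the
// "No container was found matching" lines in the log.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
```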
	I0804 00:15:56.317745   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:56.317762   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:56.335040   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:56.335069   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:56.475995   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:56.476017   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:56.476033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:56.567508   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:56.567551   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:56.618136   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:56.618166   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
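	The "Gathering logs for ..." lines each run a fixed shell pipeline (journalctl, dmesg, crictl) and capture its output for the failure report. A rough sketch of that collection step, assuming the same pipelines shown above are available on the host; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs each log source through bash and captures its combined
// output, mirroring the "Gathering logs for ..." steps in the log above.
func gatherLogs() map[string]string {
	sources := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":      "sudo journalctl -u crio -n 400",
		"containers": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	logs := make(map[string]string, len(sources))
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out = append(out, []byte(fmt.Sprintf("\n(command failed: %v)", err))...)
		}
		logs[name] = string(out)
	}
	return logs
}

func main() {
	for name, out := range gatherLogs() {
		fmt.Printf("==> %s <== (%d bytes)\n", name, len(out))
	}
}
```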
	I0804 00:15:54.112928   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.112987   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.179330   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.789712   65441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.789738   65441 pod_ready.go:81] duration metric: took 4.788487591s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.789749   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799762   65441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.799785   65441 pod_ready.go:81] duration metric: took 10.029856ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799795   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805685   65441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.805708   65441 pod_ready.go:81] duration metric: took 5.905108ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805718   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809797   65441 pod_ready.go:92] pod "kube-proxy-zz7fr" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.809818   65441 pod_ready.go:81] duration metric: took 4.094183ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809827   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820536   65441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.820557   65441 pod_ready.go:81] duration metric: took 10.722903ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820567   65441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:56.827543   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
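	The pod_ready lines track the Ready condition of each system pod until it flips to True or the 6m0s budget runs out. A sketch of that check using client-go (an assumed dependency; the kubeconfig path and pod name below are placeholders taken from this log, and the helper name is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has the Ready condition set to True,
// which is the condition behind the pod_ready lines in the log.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		ready, err := podReady(ctx, cs, "kube-system", "etcd-default-k8s-diff-port-969068")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```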
	I0804 00:15:56.078916   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.738609   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.738641   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:58.738657   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.772665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.772695   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:59.079121   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.083798   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.083829   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.579242   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.585343   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.585381   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.078877   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.099981   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.100022   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.578505   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.582665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.582692   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.172886   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:59.187045   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:59.187128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:59.225135   64758 cri.go:89] found id: ""
	I0804 00:15:59.225164   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.225173   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:59.225179   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:59.225255   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:59.262538   64758 cri.go:89] found id: ""
	I0804 00:15:59.262566   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.262573   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:59.262578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:59.262635   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:59.301665   64758 cri.go:89] found id: ""
	I0804 00:15:59.301697   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.301708   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:59.301715   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:59.301778   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:59.362742   64758 cri.go:89] found id: ""
	I0804 00:15:59.362766   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.362774   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:59.362779   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:59.362834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:59.404398   64758 cri.go:89] found id: ""
	I0804 00:15:59.404431   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.404509   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:59.404525   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:59.404594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:59.454257   64758 cri.go:89] found id: ""
	I0804 00:15:59.454285   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.454297   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:59.454305   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:59.454363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:59.496790   64758 cri.go:89] found id: ""
	I0804 00:15:59.496818   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.496829   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:59.496837   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:59.496896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:59.537395   64758 cri.go:89] found id: ""
	I0804 00:15:59.537424   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.537431   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:59.537439   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:59.537453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.600005   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:59.600042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:59.617304   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:59.617336   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:59.692828   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:59.692849   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:59.692864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:59.764000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:59.764038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:58.611600   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.110986   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.079326   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.083661   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.083689   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:01.578711   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.583011   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.583040   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:02.078606   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:02.083234   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:16:02.090079   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:16:02.090112   64502 api_server.go:131] duration metric: took 6.511921332s to wait for apiserver health ...
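The two 500 responses above come from the apiserver's aggregated health report: every individual check passes except the `poststarthook/apiservice-discovery-controller` hook, and once that hook completes the same endpoint answers with a plain 200 "ok". A hand-run equivalent against the same endpoint (IP and port taken from the log; `-k` because the host does not trust the cluster CA, and anonymous access to /healthz assumed to be at its Kubernetes defaults):

	$ curl -sk https://192.168.50.140:8443/healthz
	ok
	$ curl -sk "https://192.168.50.140:8443/healthz?verbose"    # per-check breakdown like the 500 bodies above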
	I0804 00:16:02.090123   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:16:02.090132   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:16:02.092169   64502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:58.829268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.327623   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:02.093704   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:16:02.109001   64502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
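With the kvm2 driver and the crio runtime, minikube falls back to its built-in bridge CNI, and the 496-byte conflist just copied is what CRI-O will load from /etc/cni/net.d. A quick way to inspect it on the node (a sketch using the profile name from this log):

	$ minikube -p embed-certs-877598 ssh "sudo ls /etc/cni/net.d && sudo cat /etc/cni/net.d/1-k8s.conflist"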
	I0804 00:16:02.131996   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:16:02.145300   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:16:02.145333   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:16:02.145340   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:16:02.145348   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:16:02.145370   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:16:02.145380   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:16:02.145389   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:16:02.145397   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:16:02.145403   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:16:02.145412   64502 system_pods.go:74] duration metric: took 13.393537ms to wait for pod list to return data ...
	I0804 00:16:02.145425   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:16:02.149623   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:16:02.149651   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:16:02.149661   64502 node_conditions.go:105] duration metric: took 4.231097ms to run NodePressure ...
	I0804 00:16:02.149677   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:16:02.424261   64502 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429537   64502 kubeadm.go:739] kubelet initialised
	I0804 00:16:02.429555   64502 kubeadm.go:740] duration metric: took 5.269005ms waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429563   64502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:02.435433   64502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.440580   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440606   64502 pod_ready.go:81] duration metric: took 5.145511ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.440619   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440628   64502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.445111   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445136   64502 pod_ready.go:81] duration metric: took 4.497361ms for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.445148   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445157   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.450172   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450200   64502 pod_ready.go:81] duration metric: took 5.032514ms for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.450211   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450219   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.536314   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536386   64502 pod_ready.go:81] duration metric: took 86.155481ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.536398   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536409   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.935794   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935830   64502 pod_ready.go:81] duration metric: took 399.405535ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.935842   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935861   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.335730   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335760   64502 pod_ready.go:81] duration metric: took 399.889478ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.335772   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335780   64502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.735762   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735786   64502 pod_ready.go:81] duration metric: took 399.996995ms for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.735795   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735802   64502 pod_ready.go:38] duration metric: took 1.306222891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
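Each wait above is cut short for the same reason: the node hosting the pods still reports Ready:"False", so per-pod readiness is skipped rather than trusted. Roughly the same view from the host (assuming, as elsewhere in this report, that the kubectl context name matches the profile name):

	$ kubectl --context embed-certs-877598 get nodes
	$ kubectl --context embed-certs-877598 -n kube-system get pods -o wide    # pods listed against the still-NotReady node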
	I0804 00:16:03.735818   64502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:16:03.748578   64502 ops.go:34] apiserver oom_adj: -16
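oom_adj is the legacy kernel knob (range -17 to +15); a value of -16 means the OOM killer should strongly avoid this process, which is the expected protection for kube-apiserver. The modern per-process file carries the same information on a -1000..+1000 scale:

	$ cat /proc/$(pgrep kube-apiserver)/oom_score_adj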
	I0804 00:16:03.748602   64502 kubeadm.go:597] duration metric: took 11.026274037s to restartPrimaryControlPlane
	I0804 00:16:03.748611   64502 kubeadm.go:394] duration metric: took 11.082760058s to StartCluster
	I0804 00:16:03.748637   64502 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.748719   64502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:16:03.750554   64502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.750824   64502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:16:03.750900   64502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:16:03.750998   64502 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-877598"
	I0804 00:16:03.751041   64502 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-877598"
	W0804 00:16:03.751053   64502 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:16:03.751051   64502 addons.go:69] Setting default-storageclass=true in profile "embed-certs-877598"
	I0804 00:16:03.751072   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:16:03.751108   64502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-877598"
	I0804 00:16:03.751063   64502 addons.go:69] Setting metrics-server=true in profile "embed-certs-877598"
	I0804 00:16:03.751181   64502 addons.go:234] Setting addon metrics-server=true in "embed-certs-877598"
	W0804 00:16:03.751196   64502 addons.go:243] addon metrics-server should already be in state true
	I0804 00:16:03.751245   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751467   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751503   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751540   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751612   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751088   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751990   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.752017   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.752817   64502 out.go:177] * Verifying Kubernetes components...
	I0804 00:16:03.754613   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:16:03.769684   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0804 00:16:03.769701   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0804 00:16:03.769697   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0804 00:16:03.770197   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770332   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770619   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770792   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770808   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.770935   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770949   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771125   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771327   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771520   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.771545   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771555   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.771938   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.772138   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772195   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.772521   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772565   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.776267   64502 addons.go:234] Setting addon default-storageclass=true in "embed-certs-877598"
	W0804 00:16:03.776292   64502 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:16:03.776327   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.776695   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.776738   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.789183   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0804 00:16:03.789660   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.789796   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0804 00:16:03.790184   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790202   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790246   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.790608   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.790869   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790900   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790985   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.791276   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.791519   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.793005   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.793338   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.795747   64502 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:16:03.795748   64502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:16:03.796208   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0804 00:16:03.796652   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.797194   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.797220   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.797589   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:16:03.797611   64502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:16:03.797632   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.797640   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.797673   64502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:03.797684   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:16:03.797697   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.798266   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.798311   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.801933   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802083   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802417   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802445   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.802766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.802851   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802868   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802936   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803140   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.803166   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.803310   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.803409   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803512   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.818073   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0804 00:16:03.818647   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.819107   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.819130   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.819488   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.819721   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.821982   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.822216   64502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:03.822232   64502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:16:03.822251   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.825593   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826055   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.826090   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826356   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.826526   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.826667   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.826829   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.955019   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:16:03.976453   64502 node_ready.go:35] waiting up to 6m0s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:04.051717   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:04.074720   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:16:04.074740   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:16:04.099578   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:16:04.099606   64502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:16:04.118348   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:04.163390   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:04.163418   64502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:16:04.227379   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:05.143364   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091613097s)
	I0804 00:16:05.143418   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143419   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.025041953s)
	I0804 00:16:05.143430   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143439   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143449   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143726   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143743   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143755   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143764   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.143893   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143915   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143935   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143964   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.144014   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144033   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.144085   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144259   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144305   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144319   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.150739   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.150761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.151073   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.151102   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.151130   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.169806   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.169832   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170103   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.170122   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170148   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170159   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.170171   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170461   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170546   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170563   64502 addons.go:475] Verifying addon metrics-server=true in "embed-certs-877598"
	I0804 00:16:05.172575   64502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0804 00:16:05.173964   64502 addons.go:510] duration metric: took 1.423065893s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
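All three addons apply within about 1.4s of StartCluster. Note that this run pointed the metrics-server deployment at `fake.domain/registry.k8s.io/echoserver:1.4` (see the "Using image" line above), an unreachable registry, so its pod cannot pull an image and keeps reporting Ready:"False" in the polling lines that follow. A way to surface that directly (label selector assumed from the stock metrics-server manifests):

	$ kubectl --context embed-certs-877598 -n kube-system get deploy metrics-server
	$ kubectl --context embed-certs-877598 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20    # image pull failure shows up under Events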
	I0804 00:16:02.307325   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:02.324168   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:02.324233   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:02.370204   64758 cri.go:89] found id: ""
	I0804 00:16:02.370234   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.370250   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:02.370258   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:02.370325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:02.405586   64758 cri.go:89] found id: ""
	I0804 00:16:02.405616   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.405628   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:02.405636   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:02.405694   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:02.445644   64758 cri.go:89] found id: ""
	I0804 00:16:02.445665   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.445675   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:02.445682   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:02.445739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:02.483659   64758 cri.go:89] found id: ""
	I0804 00:16:02.483686   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.483695   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:02.483701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:02.483751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:02.519903   64758 cri.go:89] found id: ""
	I0804 00:16:02.519929   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.519938   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:02.519944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:02.519991   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:02.557373   64758 cri.go:89] found id: ""
	I0804 00:16:02.557401   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.557410   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:02.557416   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:02.557472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:02.594203   64758 cri.go:89] found id: ""
	I0804 00:16:02.594238   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.594249   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:02.594256   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:02.594316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:02.635487   64758 cri.go:89] found id: ""
	I0804 00:16:02.635512   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.635520   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:02.635529   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:02.635543   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:02.686990   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:02.687035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:02.701784   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:02.701810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:02.778626   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:02.778648   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:02.778662   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:02.856056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:02.856097   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:05.402858   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:05.418825   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:05.418900   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:05.458789   64758 cri.go:89] found id: ""
	I0804 00:16:05.458872   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.458887   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:05.458895   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:05.458967   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:05.498258   64758 cri.go:89] found id: ""
	I0804 00:16:05.498284   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.498295   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:05.498302   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:05.498364   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:05.540892   64758 cri.go:89] found id: ""
	I0804 00:16:05.540919   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.540927   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:05.540933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:05.540992   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:05.578876   64758 cri.go:89] found id: ""
	I0804 00:16:05.578911   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.578919   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:05.578924   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:05.578971   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:05.616248   64758 cri.go:89] found id: ""
	I0804 00:16:05.616272   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.616280   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:05.616285   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:05.616339   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:05.654387   64758 cri.go:89] found id: ""
	I0804 00:16:05.654419   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.654428   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:05.654436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:05.654528   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:05.695579   64758 cri.go:89] found id: ""
	I0804 00:16:05.695613   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.695625   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:05.695669   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:05.695752   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:05.740754   64758 cri.go:89] found id: ""
	I0804 00:16:05.740777   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.740785   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:05.740793   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:05.740805   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:05.792091   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:05.792126   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:05.809130   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:05.809164   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:05.888441   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:05.888465   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:05.888479   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:05.969336   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:05.969390   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:03.111834   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.613749   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:03.830570   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:06.328076   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.980692   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:08.480205   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:09.480127   64502 node_ready.go:49] node "embed-certs-877598" has status "Ready":"True"
	I0804 00:16:09.480147   64502 node_ready.go:38] duration metric: took 5.503660587s for node "embed-certs-877598" to be "Ready" ...
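The node turns Ready roughly 5.5s after the kubelet restart. The same condition wait can be reproduced with kubectl directly (a sketch; timeout chosen to match the 6m budget in the log):

	$ kubectl --context embed-certs-877598 wait node/embed-certs-877598 --for=condition=Ready --timeout=6m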
	I0804 00:16:09.480155   64502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:09.485704   64502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491316   64502 pod_ready.go:92] pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:09.491340   64502 pod_ready.go:81] duration metric: took 5.611918ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491348   64502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:08.514981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:08.531117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:08.531188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:08.569167   64758 cri.go:89] found id: ""
	I0804 00:16:08.569199   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.569210   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:08.569218   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:08.569282   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:08.608478   64758 cri.go:89] found id: ""
	I0804 00:16:08.608559   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.608572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:08.608580   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:08.608636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:08.645939   64758 cri.go:89] found id: ""
	I0804 00:16:08.645972   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.645983   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:08.645990   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:08.646050   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:08.685274   64758 cri.go:89] found id: ""
	I0804 00:16:08.685305   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.685316   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:08.685324   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:08.685400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:08.722314   64758 cri.go:89] found id: ""
	I0804 00:16:08.722345   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.722357   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:08.722363   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:08.722427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:08.758577   64758 cri.go:89] found id: ""
	I0804 00:16:08.758606   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.758617   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:08.758624   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:08.758685   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.798734   64758 cri.go:89] found id: ""
	I0804 00:16:08.798761   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.798773   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:08.798781   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:08.798842   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:08.837577   64758 cri.go:89] found id: ""
	I0804 00:16:08.837600   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.837608   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:08.837616   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:08.837627   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:08.894426   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:08.894465   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:08.909851   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:08.909879   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:08.989858   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:08.989878   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:08.989893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:09.081056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:09.081098   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:11.627914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:11.641805   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:11.641896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:11.679002   64758 cri.go:89] found id: ""
	I0804 00:16:11.679028   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.679036   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:11.679042   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:11.679090   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:11.720188   64758 cri.go:89] found id: ""
	I0804 00:16:11.720220   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.720236   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:11.720245   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:11.720307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:11.760085   64758 cri.go:89] found id: ""
	I0804 00:16:11.760118   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.760130   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:11.760138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:11.760198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:11.796220   64758 cri.go:89] found id: ""
	I0804 00:16:11.796249   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.796266   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:11.796274   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:11.796335   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:11.834216   64758 cri.go:89] found id: ""
	I0804 00:16:11.834243   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.834253   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:11.834260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:11.834336   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:11.869205   64758 cri.go:89] found id: ""
	I0804 00:16:11.869230   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.869237   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:11.869243   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:11.869301   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.110499   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.618011   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:08.827284   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.828942   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:11.498264   64502 pod_ready.go:102] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:12.498916   64502 pod_ready.go:92] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:12.498949   64502 pod_ready.go:81] duration metric: took 3.007593153s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:12.498961   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562862   64502 pod_ready.go:92] pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.562896   64502 pod_ready.go:81] duration metric: took 2.063926324s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562910   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573628   64502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.573655   64502 pod_ready.go:81] duration metric: took 10.735916ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573670   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583241   64502 pod_ready.go:92] pod "kube-proxy-wk8zf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.583266   64502 pod_ready.go:81] duration metric: took 9.588875ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583278   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593419   64502 pod_ready.go:92] pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.593445   64502 pod_ready.go:81] duration metric: took 10.158665ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593457   64502 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:11.912091   64758 cri.go:89] found id: ""
	I0804 00:16:11.912120   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.912132   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:11.912145   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:11.912203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:11.949570   64758 cri.go:89] found id: ""
	I0804 00:16:11.949603   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.949614   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:11.949625   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:11.949643   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:12.006542   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:12.006575   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:12.022435   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:12.022474   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:12.101007   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:12.101032   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:12.101057   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:12.183836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:12.183876   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:14.725345   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:14.738389   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:14.738464   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:14.780103   64758 cri.go:89] found id: ""
	I0804 00:16:14.780133   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.780142   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:14.780147   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:14.780197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:14.817811   64758 cri.go:89] found id: ""
	I0804 00:16:14.817847   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.817863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:14.817872   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:14.817946   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:14.854450   64758 cri.go:89] found id: ""
	I0804 00:16:14.854478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.854488   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:14.854495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:14.854561   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:14.891862   64758 cri.go:89] found id: ""
	I0804 00:16:14.891891   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.891900   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:14.891905   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:14.891958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:14.928450   64758 cri.go:89] found id: ""
	I0804 00:16:14.928478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.928488   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:14.928495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:14.928554   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:14.965820   64758 cri.go:89] found id: ""
	I0804 00:16:14.965848   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.965860   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:14.965867   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:14.965945   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:15.008725   64758 cri.go:89] found id: ""
	I0804 00:16:15.008874   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.008888   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:15.008897   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:15.008957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:15.044618   64758 cri.go:89] found id: ""
	I0804 00:16:15.044768   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.044792   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:15.044802   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:15.044815   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:15.102786   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:15.102825   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:15.118305   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:15.118347   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:15.196397   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:15.196420   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:15.196435   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:15.277941   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:15.277986   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:13.110969   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.112546   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:13.327840   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.826447   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:16.600315   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.099064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.819354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:17.834271   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:17.834332   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:17.870930   64758 cri.go:89] found id: ""
	I0804 00:16:17.870961   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.870973   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:17.870980   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:17.871040   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:17.907980   64758 cri.go:89] found id: ""
	I0804 00:16:17.908007   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.908016   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:17.908021   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:17.908067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:17.943257   64758 cri.go:89] found id: ""
	I0804 00:16:17.943284   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.943295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:17.943301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:17.943363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:17.982297   64758 cri.go:89] found id: ""
	I0804 00:16:17.982328   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.982338   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:17.982345   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:17.982405   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:18.022780   64758 cri.go:89] found id: ""
	I0804 00:16:18.022810   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.022841   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:18.022850   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:18.022913   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:18.061891   64758 cri.go:89] found id: ""
	I0804 00:16:18.061926   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.061937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:18.061945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:18.062012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:18.100807   64758 cri.go:89] found id: ""
	I0804 00:16:18.100845   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.100855   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:18.100862   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:18.100917   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:18.142011   64758 cri.go:89] found id: ""
	I0804 00:16:18.142044   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.142056   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:18.142066   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:18.142090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:18.195476   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:18.195511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:18.209661   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:18.209690   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:18.282638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:18.282657   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:18.282669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:18.363900   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:18.363938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:20.908753   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:20.922878   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:20.922962   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:20.961013   64758 cri.go:89] found id: ""
	I0804 00:16:20.961041   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.961052   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:20.961058   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:20.961109   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:20.998027   64758 cri.go:89] found id: ""
	I0804 00:16:20.998059   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.998068   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:20.998074   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:20.998121   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:21.035640   64758 cri.go:89] found id: ""
	I0804 00:16:21.035669   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.035680   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:21.035688   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:21.035751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:21.075737   64758 cri.go:89] found id: ""
	I0804 00:16:21.075770   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.075779   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:21.075786   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:21.075846   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:21.120024   64758 cri.go:89] found id: ""
	I0804 00:16:21.120046   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.120054   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:21.120061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:21.120126   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:21.160796   64758 cri.go:89] found id: ""
	I0804 00:16:21.160821   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.160840   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:21.160847   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:21.160907   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:21.195519   64758 cri.go:89] found id: ""
	I0804 00:16:21.195547   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.195558   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:21.195566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:21.195629   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:21.236193   64758 cri.go:89] found id: ""
	I0804 00:16:21.236222   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.236232   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:21.236243   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:21.236258   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:21.295154   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:21.295198   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:21.309540   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:21.309566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:21.389391   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:21.389416   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:21.389433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:21.472771   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:21.472808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:17.611366   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.612092   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.827036   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.827655   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.828026   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.101899   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:23.601687   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.018923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:24.032954   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:24.033013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:24.073677   64758 cri.go:89] found id: ""
	I0804 00:16:24.073703   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.073711   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:24.073716   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:24.073777   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:24.115752   64758 cri.go:89] found id: ""
	I0804 00:16:24.115775   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.115785   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:24.115792   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:24.115849   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:24.152967   64758 cri.go:89] found id: ""
	I0804 00:16:24.153001   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.153017   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:24.153024   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:24.153098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:24.190557   64758 cri.go:89] found id: ""
	I0804 00:16:24.190581   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.190589   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:24.190595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:24.190643   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:24.229312   64758 cri.go:89] found id: ""
	I0804 00:16:24.229341   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.229351   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:24.229373   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:24.229437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:24.265076   64758 cri.go:89] found id: ""
	I0804 00:16:24.265100   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.265107   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:24.265113   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:24.265167   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:24.306508   64758 cri.go:89] found id: ""
	I0804 00:16:24.306534   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.306542   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:24.306547   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:24.306598   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:24.350714   64758 cri.go:89] found id: ""
	I0804 00:16:24.350747   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.350759   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:24.350770   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:24.350785   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:24.366188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:24.366216   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:24.438410   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:24.438431   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:24.438447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:24.522635   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:24.522669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.562647   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:24.562678   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:22.110420   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.111399   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.613839   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.327982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.826914   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.099435   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:28.099896   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:30.100659   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:27.119437   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:27.133330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:27.133426   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:27.170001   64758 cri.go:89] found id: ""
	I0804 00:16:27.170039   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.170048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:27.170054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:27.170112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:27.205811   64758 cri.go:89] found id: ""
	I0804 00:16:27.205843   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.205854   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:27.205861   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:27.205922   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:27.247249   64758 cri.go:89] found id: ""
	I0804 00:16:27.247278   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.247287   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:27.247294   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:27.247360   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:27.285659   64758 cri.go:89] found id: ""
	I0804 00:16:27.285688   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.285697   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:27.285703   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:27.285774   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:27.321039   64758 cri.go:89] found id: ""
	I0804 00:16:27.321066   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.321075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:27.321084   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:27.321130   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:27.359947   64758 cri.go:89] found id: ""
	I0804 00:16:27.359977   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.359988   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:27.359996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:27.360056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:27.401408   64758 cri.go:89] found id: ""
	I0804 00:16:27.401432   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.401440   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:27.401449   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:27.401495   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:27.437297   64758 cri.go:89] found id: ""
	I0804 00:16:27.437326   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.437337   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:27.437347   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:27.437373   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.490594   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:27.490639   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:27.505993   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:27.506021   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:27.588779   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:27.588804   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:27.588820   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:27.681557   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:27.681592   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.225062   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:30.239475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:30.239540   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:30.283896   64758 cri.go:89] found id: ""
	I0804 00:16:30.283923   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.283931   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:30.283938   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:30.284013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:30.321506   64758 cri.go:89] found id: ""
	I0804 00:16:30.321532   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.321539   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:30.321545   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:30.321593   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:30.358314   64758 cri.go:89] found id: ""
	I0804 00:16:30.358340   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.358347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:30.358353   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:30.358400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:30.393561   64758 cri.go:89] found id: ""
	I0804 00:16:30.393587   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.393595   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:30.393600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:30.393646   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:30.429907   64758 cri.go:89] found id: ""
	I0804 00:16:30.429935   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.429943   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:30.429949   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:30.430008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:30.466305   64758 cri.go:89] found id: ""
	I0804 00:16:30.466332   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.466342   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:30.466350   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:30.466408   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:30.505384   64758 cri.go:89] found id: ""
	I0804 00:16:30.505413   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.505424   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:30.505431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:30.505492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:30.541756   64758 cri.go:89] found id: ""
	I0804 00:16:30.541786   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.541796   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:30.541806   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:30.541821   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:30.555516   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:30.555554   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:30.627442   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:30.627463   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:30.627473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:30.701452   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:30.701489   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.743436   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:30.743473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:29.111149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.111470   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:29.327268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.328424   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:32.605884   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:34.608119   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.298898   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:33.315211   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:33.315292   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:33.353171   64758 cri.go:89] found id: ""
	I0804 00:16:33.353207   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.353220   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:33.353229   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:33.353297   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:33.389767   64758 cri.go:89] found id: ""
	I0804 00:16:33.389792   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.389799   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:33.389805   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:33.389851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:33.446889   64758 cri.go:89] found id: ""
	I0804 00:16:33.446928   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.446939   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:33.446946   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:33.447004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:33.487340   64758 cri.go:89] found id: ""
	I0804 00:16:33.487362   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.487370   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:33.487376   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:33.487423   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:33.530398   64758 cri.go:89] found id: ""
	I0804 00:16:33.530421   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.530429   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:33.530435   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:33.530483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:33.568725   64758 cri.go:89] found id: ""
	I0804 00:16:33.568753   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.568762   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:33.568769   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:33.568818   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:33.607205   64758 cri.go:89] found id: ""
	I0804 00:16:33.607232   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.607242   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:33.607249   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:33.607311   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:33.648188   64758 cri.go:89] found id: ""
	I0804 00:16:33.648220   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.648230   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:33.648240   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:33.648256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.700231   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:33.700266   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:33.714899   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:33.714932   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:33.794306   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:33.794326   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:33.794340   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.872446   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:33.872482   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.415000   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:36.428920   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:36.428996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:36.464784   64758 cri.go:89] found id: ""
	I0804 00:16:36.464810   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.464817   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:36.464823   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:36.464925   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:36.501394   64758 cri.go:89] found id: ""
	I0804 00:16:36.501423   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.501431   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:36.501437   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:36.501497   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:36.537049   64758 cri.go:89] found id: ""
	I0804 00:16:36.537079   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.537090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:36.537102   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:36.537173   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:36.573956   64758 cri.go:89] found id: ""
	I0804 00:16:36.573986   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.573997   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:36.574004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:36.574065   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:36.612996   64758 cri.go:89] found id: ""
	I0804 00:16:36.613016   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.613023   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:36.613029   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:36.613083   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:36.652346   64758 cri.go:89] found id: ""
	I0804 00:16:36.652367   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.652374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:36.652380   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:36.652437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:36.690073   64758 cri.go:89] found id: ""
	I0804 00:16:36.690100   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.690110   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:36.690119   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:36.690182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:36.732436   64758 cri.go:89] found id: ""
	I0804 00:16:36.732466   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.732477   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:36.732487   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:36.732505   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:36.746036   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:36.746060   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:36.818141   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:36.818164   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:36.818179   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.611181   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.611691   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.329719   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.330172   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.100705   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.603600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:36.907689   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:36.907732   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.947104   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:36.947135   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.502960   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:39.516340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:39.516414   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:39.555903   64758 cri.go:89] found id: ""
	I0804 00:16:39.555929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.555939   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:39.555946   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:39.556004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:39.599791   64758 cri.go:89] found id: ""
	I0804 00:16:39.599816   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.599827   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:39.599834   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:39.599894   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:39.642903   64758 cri.go:89] found id: ""
	I0804 00:16:39.642929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.642936   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:39.642944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:39.643004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:39.678667   64758 cri.go:89] found id: ""
	I0804 00:16:39.678693   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.678702   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:39.678709   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:39.678757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:39.716888   64758 cri.go:89] found id: ""
	I0804 00:16:39.716916   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.716926   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:39.716933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:39.717001   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:39.751576   64758 cri.go:89] found id: ""
	I0804 00:16:39.751602   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.751610   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:39.751616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:39.751664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:39.794026   64758 cri.go:89] found id: ""
	I0804 00:16:39.794056   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.794067   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:39.794087   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:39.794158   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:39.841426   64758 cri.go:89] found id: ""
	I0804 00:16:39.841454   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.841464   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:39.841474   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:39.841492   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.902579   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:39.902616   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:39.924467   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:39.924495   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:40.001318   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:40.001345   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:40.001377   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:40.081520   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:40.081552   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:38.111443   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:40.610810   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.827851   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.828752   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.327716   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.100037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.100850   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.623094   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:42.636523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:42.636594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:42.674188   64758 cri.go:89] found id: ""
	I0804 00:16:42.674218   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.674226   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:42.674231   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:42.674277   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:42.708496   64758 cri.go:89] found id: ""
	I0804 00:16:42.708522   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.708532   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:42.708539   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:42.708601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:42.751050   64758 cri.go:89] found id: ""
	I0804 00:16:42.751087   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.751100   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:42.751107   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:42.751170   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:42.788520   64758 cri.go:89] found id: ""
	I0804 00:16:42.788546   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.788555   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:42.788560   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:42.788619   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:42.828273   64758 cri.go:89] found id: ""
	I0804 00:16:42.828297   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.828304   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:42.828309   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:42.828356   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:42.867754   64758 cri.go:89] found id: ""
	I0804 00:16:42.867784   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.867799   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:42.867807   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:42.867864   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:42.903945   64758 cri.go:89] found id: ""
	I0804 00:16:42.903977   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.903988   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:42.903996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:42.904059   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:42.942477   64758 cri.go:89] found id: ""
	I0804 00:16:42.942518   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.942539   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:42.942549   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:42.942565   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.981776   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:42.981810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:43.037601   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:43.037634   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:43.052719   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:43.052746   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:43.122664   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:43.122688   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:43.122702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:45.701275   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:45.714532   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:45.714607   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:45.750932   64758 cri.go:89] found id: ""
	I0804 00:16:45.750955   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.750986   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:45.750991   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:45.751042   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:45.787348   64758 cri.go:89] found id: ""
	I0804 00:16:45.787373   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.787381   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:45.787387   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:45.787441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:45.823390   64758 cri.go:89] found id: ""
	I0804 00:16:45.823419   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.823429   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:45.823436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:45.823498   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:45.861400   64758 cri.go:89] found id: ""
	I0804 00:16:45.861430   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.861440   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:45.861448   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:45.861508   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:45.898992   64758 cri.go:89] found id: ""
	I0804 00:16:45.899024   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.899036   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:45.899043   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:45.899110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:45.934542   64758 cri.go:89] found id: ""
	I0804 00:16:45.934570   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.934582   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:45.934589   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:45.934648   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:45.967908   64758 cri.go:89] found id: ""
	I0804 00:16:45.967938   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.967949   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:45.967957   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:45.968018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:46.006475   64758 cri.go:89] found id: ""
	I0804 00:16:46.006504   64758 logs.go:276] 0 containers: []
	W0804 00:16:46.006516   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:46.006526   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:46.006541   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:46.058760   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:46.058793   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:46.074753   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:46.074777   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:46.149634   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:46.149655   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:46.149671   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:46.230104   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:46.230140   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:43.111492   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:45.611224   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.827683   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:47.326999   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:46.600307   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.100532   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:48.772224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:48.785848   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:48.785935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.825206   64758 cri.go:89] found id: ""
	I0804 00:16:48.825232   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.825242   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:48.825249   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:48.825315   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:48.861559   64758 cri.go:89] found id: ""
	I0804 00:16:48.861588   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.861599   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:48.861607   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:48.861675   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:48.903375   64758 cri.go:89] found id: ""
	I0804 00:16:48.903401   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.903412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:48.903419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:48.903480   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:48.940708   64758 cri.go:89] found id: ""
	I0804 00:16:48.940736   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.940748   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:48.940755   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:48.940817   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:48.976190   64758 cri.go:89] found id: ""
	I0804 00:16:48.976218   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.976228   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:48.976236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:48.976291   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:49.010393   64758 cri.go:89] found id: ""
	I0804 00:16:49.010423   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.010434   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:49.010442   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:49.010506   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:49.046670   64758 cri.go:89] found id: ""
	I0804 00:16:49.046698   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.046707   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:49.046711   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:49.046759   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:49.085254   64758 cri.go:89] found id: ""
	I0804 00:16:49.085284   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.085293   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:49.085302   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:49.085314   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:49.142402   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:49.142433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:49.157063   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:49.157092   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:49.233808   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:49.233829   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:49.233841   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:49.320355   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:49.320395   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:51.862548   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:51.875679   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:51.875750   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.110954   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:50.111867   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.327109   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.327920   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.600258   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.601052   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.911400   64758 cri.go:89] found id: ""
	I0804 00:16:51.911427   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.911437   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:51.911444   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:51.911505   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:51.948825   64758 cri.go:89] found id: ""
	I0804 00:16:51.948853   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.948863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:51.948870   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:51.948935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:51.989458   64758 cri.go:89] found id: ""
	I0804 00:16:51.989488   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.989499   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:51.989506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:51.989568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:52.026663   64758 cri.go:89] found id: ""
	I0804 00:16:52.026685   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.026693   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:52.026698   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:52.026754   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:52.066089   64758 cri.go:89] found id: ""
	I0804 00:16:52.066115   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.066127   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:52.066135   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:52.066198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:52.102159   64758 cri.go:89] found id: ""
	I0804 00:16:52.102185   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.102196   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:52.102203   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:52.102258   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:52.144239   64758 cri.go:89] found id: ""
	I0804 00:16:52.144266   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.144276   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:52.144283   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:52.144344   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:52.180679   64758 cri.go:89] found id: ""
	I0804 00:16:52.180708   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.180717   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:52.180725   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:52.180738   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:52.262074   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:52.262116   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.305913   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:52.305948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:52.357044   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:52.357081   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:52.372090   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:52.372119   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:52.444148   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:54.944910   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:54.958182   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:54.958239   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:54.993629   64758 cri.go:89] found id: ""
	I0804 00:16:54.993657   64758 logs.go:276] 0 containers: []
	W0804 00:16:54.993668   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:54.993675   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:54.993734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:55.029270   64758 cri.go:89] found id: ""
	I0804 00:16:55.029299   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.029310   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:55.029317   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:55.029393   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:55.067923   64758 cri.go:89] found id: ""
	I0804 00:16:55.067951   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.067961   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:55.067968   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:55.068027   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:55.107533   64758 cri.go:89] found id: ""
	I0804 00:16:55.107556   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.107565   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:55.107572   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:55.107633   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:55.143828   64758 cri.go:89] found id: ""
	I0804 00:16:55.143856   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.143868   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:55.143875   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:55.143940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:55.177960   64758 cri.go:89] found id: ""
	I0804 00:16:55.178015   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.178030   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:55.178038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:55.178112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:55.217457   64758 cri.go:89] found id: ""
	I0804 00:16:55.217481   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.217488   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:55.217494   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:55.217538   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:55.259862   64758 cri.go:89] found id: ""
	I0804 00:16:55.259890   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.259898   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:55.259907   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:55.259918   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:55.311566   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:55.311598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:55.327833   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:55.327866   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:55.406475   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:55.406495   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:55.406511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:55.484586   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:55.484618   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.610982   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:54.611276   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.611515   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.827394   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:55.827945   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.099238   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.100223   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.599870   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.028251   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:58.042169   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:58.042236   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:58.076836   64758 cri.go:89] found id: ""
	I0804 00:16:58.076859   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.076868   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:58.076873   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:58.076937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:58.115989   64758 cri.go:89] found id: ""
	I0804 00:16:58.116019   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.116031   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:58.116037   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:58.116099   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:58.155049   64758 cri.go:89] found id: ""
	I0804 00:16:58.155079   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.155090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:58.155097   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:58.155160   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:58.190257   64758 cri.go:89] found id: ""
	I0804 00:16:58.190293   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.190305   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:58.190315   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:58.190370   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:58.225001   64758 cri.go:89] found id: ""
	I0804 00:16:58.225029   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.225038   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:58.225061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:58.225118   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:58.268881   64758 cri.go:89] found id: ""
	I0804 00:16:58.268925   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.268937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:58.268945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:58.269010   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:58.305223   64758 cri.go:89] found id: ""
	I0804 00:16:58.305253   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.305269   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:58.305277   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:58.305340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:58.340517   64758 cri.go:89] found id: ""
	I0804 00:16:58.340548   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.340559   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:58.340570   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:58.340584   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:58.355372   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:58.355403   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:58.426292   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:58.426312   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:58.426326   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:58.509990   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:58.510034   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.550957   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:58.550988   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.104806   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:01.119379   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:01.119453   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:01.158376   64758 cri.go:89] found id: ""
	I0804 00:17:01.158407   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.158419   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:01.158426   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:01.158484   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:01.193826   64758 cri.go:89] found id: ""
	I0804 00:17:01.193858   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.193869   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:01.193876   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:01.193937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:01.228566   64758 cri.go:89] found id: ""
	I0804 00:17:01.228588   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.228600   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:01.228607   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:01.228667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:01.265736   64758 cri.go:89] found id: ""
	I0804 00:17:01.265762   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.265772   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:01.265778   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:01.265834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:01.302655   64758 cri.go:89] found id: ""
	I0804 00:17:01.302679   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.302694   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:01.302699   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:01.302753   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:01.340191   64758 cri.go:89] found id: ""
	I0804 00:17:01.340218   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.340226   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:01.340236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:01.340294   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:01.375767   64758 cri.go:89] found id: ""
	I0804 00:17:01.375789   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.375797   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:01.375802   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:01.375875   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:01.412446   64758 cri.go:89] found id: ""
	I0804 00:17:01.412479   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.412490   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:01.412502   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:01.412518   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.466271   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:01.466309   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:01.480800   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:01.480838   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:01.547909   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:01.547932   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:01.547948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:01.628318   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:01.628351   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.611854   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:01.111626   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.326831   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.327154   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.328038   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.601960   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.099489   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.175883   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:04.189038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:04.189098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:04.229126   64758 cri.go:89] found id: ""
	I0804 00:17:04.229158   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.229167   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:04.229174   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:04.229235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:04.264107   64758 cri.go:89] found id: ""
	I0804 00:17:04.264134   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.264142   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:04.264147   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:04.264203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:04.299959   64758 cri.go:89] found id: ""
	I0804 00:17:04.299996   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.300004   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:04.300010   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:04.300056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:04.337978   64758 cri.go:89] found id: ""
	I0804 00:17:04.338006   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.338016   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:04.338023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:04.338081   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:04.377969   64758 cri.go:89] found id: ""
	I0804 00:17:04.377993   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.378001   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:04.378006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:04.378068   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:04.413036   64758 cri.go:89] found id: ""
	I0804 00:17:04.413062   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.413071   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:04.413078   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:04.413140   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:04.450387   64758 cri.go:89] found id: ""
	I0804 00:17:04.450417   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.450426   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:04.450431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:04.450488   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:04.490132   64758 cri.go:89] found id: ""
	I0804 00:17:04.490165   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.490177   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:04.490188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:04.490204   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:04.560633   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:04.560653   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:04.560668   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:04.639409   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:04.639445   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.682479   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:04.682512   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:04.734823   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:04.734857   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:03.112357   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.828050   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.327249   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.099893   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.100093   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.250174   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:07.263523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:07.263599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:07.300095   64758 cri.go:89] found id: ""
	I0804 00:17:07.300124   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.300136   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:07.300144   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:07.300211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:07.337798   64758 cri.go:89] found id: ""
	I0804 00:17:07.337824   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.337846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:07.337851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:07.337902   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:07.375305   64758 cri.go:89] found id: ""
	I0804 00:17:07.375337   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.375348   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:07.375356   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:07.375406   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:07.411603   64758 cri.go:89] found id: ""
	I0804 00:17:07.411629   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.411639   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:07.411646   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:07.411704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:07.450478   64758 cri.go:89] found id: ""
	I0804 00:17:07.450502   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.450511   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:07.450518   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:07.450564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:07.489972   64758 cri.go:89] found id: ""
	I0804 00:17:07.489997   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.490006   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:07.490012   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:07.490073   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:07.523685   64758 cri.go:89] found id: ""
	I0804 00:17:07.523713   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.523725   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:07.523732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:07.523789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:07.562636   64758 cri.go:89] found id: ""
	I0804 00:17:07.562665   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.562675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:07.562686   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:07.562702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:07.647968   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:07.648004   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.689829   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:07.689856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:07.738333   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:07.738366   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.753419   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:07.753448   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:07.829678   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.329981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:10.343676   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:10.343743   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:10.379546   64758 cri.go:89] found id: ""
	I0804 00:17:10.379575   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.379586   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:10.379594   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:10.379657   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:10.416247   64758 cri.go:89] found id: ""
	I0804 00:17:10.416271   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.416279   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:10.416284   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:10.416340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:10.455261   64758 cri.go:89] found id: ""
	I0804 00:17:10.455291   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.455303   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:10.455310   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:10.455373   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:10.493220   64758 cri.go:89] found id: ""
	I0804 00:17:10.493251   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.493262   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:10.493270   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:10.493329   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:10.538682   64758 cri.go:89] found id: ""
	I0804 00:17:10.538709   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.538720   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:10.538727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:10.538787   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:10.575509   64758 cri.go:89] found id: ""
	I0804 00:17:10.575535   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.575546   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:10.575553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:10.575609   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:10.613163   64758 cri.go:89] found id: ""
	I0804 00:17:10.613188   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.613196   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:10.613201   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:10.613260   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:10.648914   64758 cri.go:89] found id: ""
	I0804 00:17:10.648940   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.648947   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:10.648956   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:10.648968   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:10.700151   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:10.700187   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:10.714971   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:10.714998   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:10.787679   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.787698   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:10.787710   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:10.865008   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:10.865048   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.611770   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:10.110299   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.327569   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.327855   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.603427   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.100524   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.406150   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:13.419602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:13.419659   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:13.456823   64758 cri.go:89] found id: ""
	I0804 00:17:13.456852   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.456863   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:13.456870   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:13.456935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:13.493527   64758 cri.go:89] found id: ""
	I0804 00:17:13.493556   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.493567   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:13.493574   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:13.493697   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:13.529745   64758 cri.go:89] found id: ""
	I0804 00:17:13.529770   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.529784   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:13.529790   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:13.529856   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:13.567775   64758 cri.go:89] found id: ""
	I0804 00:17:13.567811   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.567819   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:13.567824   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:13.567888   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:13.604638   64758 cri.go:89] found id: ""
	I0804 00:17:13.604670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.604678   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:13.604685   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:13.604741   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:13.646638   64758 cri.go:89] found id: ""
	I0804 00:17:13.646670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.646679   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:13.646684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:13.646730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:13.694656   64758 cri.go:89] found id: ""
	I0804 00:17:13.694682   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.694693   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:13.694701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:13.694761   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:13.733738   64758 cri.go:89] found id: ""
	I0804 00:17:13.733762   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.733771   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:13.733780   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:13.733792   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:13.749747   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:13.749775   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:13.832826   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:13.832852   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:13.832868   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:13.914198   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:13.914233   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.952753   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:13.952787   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.503600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:16.516932   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:16.517004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:16.552012   64758 cri.go:89] found id: ""
	I0804 00:17:16.552037   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.552046   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:16.552052   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:16.552110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:16.590626   64758 cri.go:89] found id: ""
	I0804 00:17:16.590653   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.590660   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:16.590666   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:16.590732   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:16.628684   64758 cri.go:89] found id: ""
	I0804 00:17:16.628712   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.628723   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:16.628729   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:16.628792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:16.664934   64758 cri.go:89] found id: ""
	I0804 00:17:16.664969   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.664980   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:16.664987   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:16.665054   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:16.700098   64758 cri.go:89] found id: ""
	I0804 00:17:16.700127   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.700138   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:16.700144   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:16.700214   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:16.736761   64758 cri.go:89] found id: ""
	I0804 00:17:16.736786   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.736795   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:16.736800   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:16.736863   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:16.780010   64758 cri.go:89] found id: ""
	I0804 00:17:16.780033   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.780045   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:16.780050   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:16.780106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:16.816079   64758 cri.go:89] found id: ""
	I0804 00:17:16.816103   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.816112   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:16.816122   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:16.816136   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.866526   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:16.866560   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:16.881254   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:16.881287   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:17:12.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.610978   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.611860   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.827860   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.327167   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.601482   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:19.100152   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:17:16.952491   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:16.952515   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:16.952530   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:17.038943   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:17.038977   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:19.580078   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:19.595538   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:19.595601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:19.632206   64758 cri.go:89] found id: ""
	I0804 00:17:19.632234   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.632245   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:19.632252   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:19.632307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:19.670335   64758 cri.go:89] found id: ""
	I0804 00:17:19.670362   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.670377   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:19.670388   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:19.670447   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:19.707772   64758 cri.go:89] found id: ""
	I0804 00:17:19.707801   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.707812   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:19.707818   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:19.707877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:19.743822   64758 cri.go:89] found id: ""
	I0804 00:17:19.743855   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.743867   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:19.743874   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:19.743930   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:19.781592   64758 cri.go:89] found id: ""
	I0804 00:17:19.781622   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.781632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:19.781640   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:19.781698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:19.818792   64758 cri.go:89] found id: ""
	I0804 00:17:19.818815   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.818823   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:19.818829   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:19.818877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:19.856486   64758 cri.go:89] found id: ""
	I0804 00:17:19.856511   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.856522   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:19.856528   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:19.856586   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:19.901721   64758 cri.go:89] found id: ""
	I0804 00:17:19.901743   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.901754   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:19.901764   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:19.901780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:19.980095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:19.980119   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:19.980134   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:20.072699   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:20.072750   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:20.159007   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:20.159038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:20.211785   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:20.211818   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:19.110218   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.110572   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:18.828527   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:20.828554   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.600968   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.602526   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.603220   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:22.727235   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:22.740922   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:22.740996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:22.780356   64758 cri.go:89] found id: ""
	I0804 00:17:22.780381   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.780392   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:22.780400   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:22.780459   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:22.817075   64758 cri.go:89] found id: ""
	I0804 00:17:22.817100   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.817111   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:22.817119   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:22.817182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:22.857213   64758 cri.go:89] found id: ""
	I0804 00:17:22.857243   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.857253   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:22.857260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:22.857325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:22.894049   64758 cri.go:89] found id: ""
	I0804 00:17:22.894085   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.894096   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:22.894104   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:22.894171   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:22.929718   64758 cri.go:89] found id: ""
	I0804 00:17:22.929746   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.929756   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:22.929770   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:22.929843   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:22.964863   64758 cri.go:89] found id: ""
	I0804 00:17:22.964892   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.964901   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:22.964907   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:22.964958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:23.002565   64758 cri.go:89] found id: ""
	I0804 00:17:23.002593   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.002603   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:23.002611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:23.002676   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:23.038161   64758 cri.go:89] found id: ""
	I0804 00:17:23.038188   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.038199   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:23.038211   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:23.038224   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:23.091865   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:23.091903   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:23.108358   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:23.108388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:23.186417   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:23.186438   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:23.186453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.269119   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:23.269161   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:25.812405   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:25.833174   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:25.833253   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:25.881654   64758 cri.go:89] found id: ""
	I0804 00:17:25.881681   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.881690   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:25.881696   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:25.881757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:25.936968   64758 cri.go:89] found id: ""
	I0804 00:17:25.936997   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.937006   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:25.937011   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:25.937066   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:25.972437   64758 cri.go:89] found id: ""
	I0804 00:17:25.972462   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.972470   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:25.972475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:25.972529   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:26.008306   64758 cri.go:89] found id: ""
	I0804 00:17:26.008346   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.008357   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:26.008366   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:26.008435   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:26.045593   64758 cri.go:89] found id: ""
	I0804 00:17:26.045620   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.045632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:26.045639   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:26.045696   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:26.084170   64758 cri.go:89] found id: ""
	I0804 00:17:26.084195   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.084205   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:26.084212   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:26.084272   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:26.122524   64758 cri.go:89] found id: ""
	I0804 00:17:26.122551   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.122559   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:26.122565   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:26.122623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:26.159264   64758 cri.go:89] found id: ""
	I0804 00:17:26.159297   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.159308   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:26.159320   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:26.159337   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:26.205692   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:26.205718   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:26.257286   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:26.257321   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:26.271582   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:26.271611   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:26.344562   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:26.344586   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:26.344598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.112800   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.610507   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.327294   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.828519   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.100160   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.100351   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.929410   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:28.943941   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:28.944003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:28.986127   64758 cri.go:89] found id: ""
	I0804 00:17:28.986157   64758 logs.go:276] 0 containers: []
	W0804 00:17:28.986169   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:28.986176   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:28.986237   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:29.025528   64758 cri.go:89] found id: ""
	I0804 00:17:29.025556   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.025564   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:29.025570   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:29.025624   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:29.059525   64758 cri.go:89] found id: ""
	I0804 00:17:29.059553   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.059561   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:29.059566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:29.059614   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:29.097451   64758 cri.go:89] found id: ""
	I0804 00:17:29.097489   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.097499   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:29.097506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:29.097564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:29.135504   64758 cri.go:89] found id: ""
	I0804 00:17:29.135532   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.135540   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:29.135546   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:29.135601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:29.175277   64758 cri.go:89] found id: ""
	I0804 00:17:29.175314   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.175324   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:29.175332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:29.175391   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:29.210275   64758 cri.go:89] found id: ""
	I0804 00:17:29.210303   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.210314   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:29.210321   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:29.210382   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:29.246138   64758 cri.go:89] found id: ""
	I0804 00:17:29.246174   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.246186   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:29.246196   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:29.246213   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:29.298935   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:29.298971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:29.313342   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:29.313388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:29.384609   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:29.384635   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:29.384650   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:29.461759   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:29.461795   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:27.611021   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:29.612149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:27.831367   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.327878   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.328772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.101073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.600832   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.010152   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:32.023609   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:32.023677   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:32.062480   64758 cri.go:89] found id: ""
	I0804 00:17:32.062508   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.062517   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:32.062523   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:32.062590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:32.099601   64758 cri.go:89] found id: ""
	I0804 00:17:32.099627   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.099634   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:32.099640   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:32.099691   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:32.138651   64758 cri.go:89] found id: ""
	I0804 00:17:32.138680   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.138689   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:32.138694   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:32.138751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:32.182224   64758 cri.go:89] found id: ""
	I0804 00:17:32.182249   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.182257   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:32.182264   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:32.182318   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:32.224381   64758 cri.go:89] found id: ""
	I0804 00:17:32.224410   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.224421   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:32.224429   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:32.224486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:32.261569   64758 cri.go:89] found id: ""
	I0804 00:17:32.261600   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.261609   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:32.261615   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:32.261663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:32.304769   64758 cri.go:89] found id: ""
	I0804 00:17:32.304793   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.304807   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:32.304814   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:32.304867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:32.348695   64758 cri.go:89] found id: ""
	I0804 00:17:32.348727   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.348736   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:32.348745   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:32.348757   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.389444   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:32.389473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:32.442901   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:32.442938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:32.457562   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:32.457588   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:32.529121   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:32.529144   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:32.529160   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.114712   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:35.129725   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:35.129795   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:35.167226   64758 cri.go:89] found id: ""
	I0804 00:17:35.167248   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.167257   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:35.167262   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:35.167310   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:35.200889   64758 cri.go:89] found id: ""
	I0804 00:17:35.200914   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.200922   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:35.200927   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:35.201000   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:35.234899   64758 cri.go:89] found id: ""
	I0804 00:17:35.234927   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.234938   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:35.234945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:35.235003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:35.271355   64758 cri.go:89] found id: ""
	I0804 00:17:35.271393   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.271405   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:35.271412   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:35.271471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:35.313557   64758 cri.go:89] found id: ""
	I0804 00:17:35.313585   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.313595   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:35.313602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:35.313663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:35.352931   64758 cri.go:89] found id: ""
	I0804 00:17:35.352960   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.352971   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:35.352979   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:35.353046   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:35.391202   64758 cri.go:89] found id: ""
	I0804 00:17:35.391232   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.391256   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:35.391263   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:35.391337   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:35.427599   64758 cri.go:89] found id: ""
	I0804 00:17:35.427627   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.427638   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:35.427649   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:35.427666   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:35.482025   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:35.482061   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:35.498274   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:35.498303   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:35.572606   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:35.572631   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:35.572644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.655534   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:35.655566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.114835   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.610785   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.827077   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.827108   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.601588   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.602210   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:40.602295   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.205756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:38.218974   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:38.219044   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:38.253798   64758 cri.go:89] found id: ""
	I0804 00:17:38.253827   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.253839   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:38.253852   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:38.253911   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:38.291074   64758 cri.go:89] found id: ""
	I0804 00:17:38.291102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.291113   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:38.291120   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:38.291182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:38.332097   64758 cri.go:89] found id: ""
	I0804 00:17:38.332123   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.332133   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:38.332140   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:38.332198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:38.370074   64758 cri.go:89] found id: ""
	I0804 00:17:38.370102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.370110   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:38.370117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:38.370176   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:38.406962   64758 cri.go:89] found id: ""
	I0804 00:17:38.406984   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.406993   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:38.406998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:38.407051   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:38.447532   64758 cri.go:89] found id: ""
	I0804 00:17:38.447562   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.447572   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:38.447579   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:38.447653   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:38.484326   64758 cri.go:89] found id: ""
	I0804 00:17:38.484356   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.484368   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:38.484375   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:38.484444   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:38.521831   64758 cri.go:89] found id: ""
	I0804 00:17:38.521858   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.521869   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:38.521880   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:38.521893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.570540   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:38.570569   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:38.624921   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:38.624953   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:38.639451   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:38.639477   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:38.714435   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:38.714459   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:38.714475   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:41.295160   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:41.310032   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:41.310108   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:41.350363   64758 cri.go:89] found id: ""
	I0804 00:17:41.350393   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.350404   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:41.350412   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:41.350475   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:41.391662   64758 cri.go:89] found id: ""
	I0804 00:17:41.391691   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.391698   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:41.391703   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:41.391760   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:41.429653   64758 cri.go:89] found id: ""
	I0804 00:17:41.429678   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.429686   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:41.429692   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:41.429739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:41.469456   64758 cri.go:89] found id: ""
	I0804 00:17:41.469483   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.469494   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:41.469505   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:41.469566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:41.506124   64758 cri.go:89] found id: ""
	I0804 00:17:41.506154   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.506164   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:41.506171   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:41.506234   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:41.543139   64758 cri.go:89] found id: ""
	I0804 00:17:41.543171   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.543182   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:41.543190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:41.543252   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:41.580537   64758 cri.go:89] found id: ""
	I0804 00:17:41.580568   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.580578   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:41.580585   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:41.580652   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:41.619828   64758 cri.go:89] found id: ""
	I0804 00:17:41.619854   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.619862   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:41.619869   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:41.619882   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:41.660749   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:41.660780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:41.712889   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:41.712924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:41.726422   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:41.726447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:41.805673   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:41.805697   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:41.805712   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:37.110193   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.111203   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.327800   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.327910   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.099815   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.101262   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:44.386563   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:44.399891   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:44.399954   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:44.434270   64758 cri.go:89] found id: ""
	I0804 00:17:44.434297   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.434305   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:44.434311   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:44.434372   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:44.469423   64758 cri.go:89] found id: ""
	I0804 00:17:44.469454   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.469463   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:44.469468   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:44.469535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:44.505511   64758 cri.go:89] found id: ""
	I0804 00:17:44.505539   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.505547   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:44.505553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:44.505602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:44.540897   64758 cri.go:89] found id: ""
	I0804 00:17:44.540922   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.540932   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:44.540937   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:44.540996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:44.578722   64758 cri.go:89] found id: ""
	I0804 00:17:44.578747   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.578755   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:44.578760   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:44.578812   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:44.615838   64758 cri.go:89] found id: ""
	I0804 00:17:44.615863   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.615874   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:44.615881   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:44.615940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:44.657695   64758 cri.go:89] found id: ""
	I0804 00:17:44.657724   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.657734   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:44.657741   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:44.657916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:44.695852   64758 cri.go:89] found id: ""
	I0804 00:17:44.695882   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.695892   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:44.695901   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:44.695912   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:44.754643   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:44.754687   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:44.773964   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:44.773994   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:44.857544   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:44.857567   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:44.857583   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.952987   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:44.953027   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:43.610772   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.611480   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.827218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:46.327323   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.600755   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.099574   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.504957   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:47.520153   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:47.520232   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:47.557303   64758 cri.go:89] found id: ""
	I0804 00:17:47.557326   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.557334   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:47.557339   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:47.557410   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:47.595626   64758 cri.go:89] found id: ""
	I0804 00:17:47.595655   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.595665   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:47.595675   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:47.595733   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:47.633430   64758 cri.go:89] found id: ""
	I0804 00:17:47.633458   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.633466   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:47.633472   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:47.633525   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:47.670116   64758 cri.go:89] found id: ""
	I0804 00:17:47.670140   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.670149   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:47.670154   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:47.670200   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:47.709019   64758 cri.go:89] found id: ""
	I0804 00:17:47.709042   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.709050   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:47.709055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:47.709111   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:47.745230   64758 cri.go:89] found id: ""
	I0804 00:17:47.745251   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.745259   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:47.745265   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:47.745319   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:47.787957   64758 cri.go:89] found id: ""
	I0804 00:17:47.787985   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.787996   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:47.788004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:47.788063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:47.821451   64758 cri.go:89] found id: ""
	I0804 00:17:47.821477   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.821488   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:47.821498   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:47.821516   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:47.903035   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:47.903139   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:47.903162   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:47.986659   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:47.986702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:48.037921   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:48.037951   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:48.095354   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:48.095389   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:50.613264   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:50.627717   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:50.627792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:50.669311   64758 cri.go:89] found id: ""
	I0804 00:17:50.669338   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.669347   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:50.669370   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:50.669438   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:50.714674   64758 cri.go:89] found id: ""
	I0804 00:17:50.714704   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.714713   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:50.714718   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:50.714769   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:50.755291   64758 cri.go:89] found id: ""
	I0804 00:17:50.755318   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.755326   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:50.755332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:50.755394   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:50.801927   64758 cri.go:89] found id: ""
	I0804 00:17:50.801955   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.801964   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:50.801970   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:50.802020   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:50.845096   64758 cri.go:89] found id: ""
	I0804 00:17:50.845121   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.845130   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:50.845136   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:50.845193   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:50.882664   64758 cri.go:89] found id: ""
	I0804 00:17:50.882694   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.882705   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:50.882712   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:50.882771   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:50.921233   64758 cri.go:89] found id: ""
	I0804 00:17:50.921260   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.921268   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:50.921273   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:50.921326   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:50.955254   64758 cri.go:89] found id: ""
	I0804 00:17:50.955286   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.955298   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:50.955311   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:50.955329   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:51.010001   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:51.010037   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:51.024943   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:51.024966   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:51.096095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:51.096123   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:51.096139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:51.177829   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:51.177864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:47.611778   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.110408   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:48.328693   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.828022   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.609609   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.100616   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:53.720665   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:53.736318   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:53.736380   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:53.772887   64758 cri.go:89] found id: ""
	I0804 00:17:53.772916   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.772926   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:53.772934   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:53.772995   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:53.811771   64758 cri.go:89] found id: ""
	I0804 00:17:53.811797   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.811837   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:53.811845   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:53.811906   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:53.846684   64758 cri.go:89] found id: ""
	I0804 00:17:53.846716   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.846726   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:53.846736   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:53.846798   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:53.883550   64758 cri.go:89] found id: ""
	I0804 00:17:53.883581   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.883592   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:53.883600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:53.883662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:53.921031   64758 cri.go:89] found id: ""
	I0804 00:17:53.921061   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.921072   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:53.921080   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:53.921153   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:53.960338   64758 cri.go:89] found id: ""
	I0804 00:17:53.960364   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.960374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:53.960381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:53.960441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:53.998404   64758 cri.go:89] found id: ""
	I0804 00:17:53.998434   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.998450   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:53.998458   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:53.998520   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:54.033417   64758 cri.go:89] found id: ""
	I0804 00:17:54.033444   64758 logs.go:276] 0 containers: []
	W0804 00:17:54.033453   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:54.033461   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:54.033473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:54.071945   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:54.071971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:54.124614   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:54.124644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:54.140757   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:54.140783   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:54.241735   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:54.241754   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:54.241769   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:56.821591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:56.836569   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:56.836631   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:56.872013   64758 cri.go:89] found id: ""
	I0804 00:17:56.872039   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.872048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:56.872054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:56.872110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:52.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.111566   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.828335   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:54.830625   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.831382   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:57.101663   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.600253   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.908022   64758 cri.go:89] found id: ""
	I0804 00:17:56.908051   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.908061   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:56.908067   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:56.908114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:56.943309   64758 cri.go:89] found id: ""
	I0804 00:17:56.943336   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.943347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:56.943359   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:56.943415   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:56.977799   64758 cri.go:89] found id: ""
	I0804 00:17:56.977839   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.977847   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:56.977853   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:56.977916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:57.015185   64758 cri.go:89] found id: ""
	I0804 00:17:57.015213   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.015223   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:57.015237   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:57.015295   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:57.051856   64758 cri.go:89] found id: ""
	I0804 00:17:57.051879   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.051887   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:57.051893   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:57.051944   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:57.086349   64758 cri.go:89] found id: ""
	I0804 00:17:57.086376   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.086387   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:57.086393   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:57.086439   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:57.125005   64758 cri.go:89] found id: ""
	I0804 00:17:57.125048   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.125064   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:57.125076   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:57.125090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:57.200348   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:57.200382   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:57.240899   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:57.240924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.294331   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:57.294375   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:57.308388   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:57.308429   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:57.382602   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:59.883070   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:59.897055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:59.897116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:59.932983   64758 cri.go:89] found id: ""
	I0804 00:17:59.933012   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.933021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:59.933029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:59.933088   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:59.971781   64758 cri.go:89] found id: ""
	I0804 00:17:59.971807   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.971815   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:59.971820   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:59.971878   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:00.008381   64758 cri.go:89] found id: ""
	I0804 00:18:00.008406   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.008414   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:00.008419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:00.008483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:00.053257   64758 cri.go:89] found id: ""
	I0804 00:18:00.053281   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.053290   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:00.053295   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:00.053342   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:00.089891   64758 cri.go:89] found id: ""
	I0804 00:18:00.089925   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.089936   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:00.089943   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:00.090008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:00.129833   64758 cri.go:89] found id: ""
	I0804 00:18:00.129863   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.129875   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:00.129884   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:00.129942   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:00.181324   64758 cri.go:89] found id: ""
	I0804 00:18:00.181390   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.181403   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:00.181410   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:00.181471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:00.224426   64758 cri.go:89] found id: ""
	I0804 00:18:00.224451   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.224459   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:00.224467   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:00.224481   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:00.240122   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:00.240155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:00.317324   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:00.317346   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:00.317379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:00.398917   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:00.398952   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:00.440730   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:00.440758   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.111741   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.611509   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.327597   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:01.328678   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.099384   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.100512   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.992128   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:03.006787   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:03.006870   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:03.041291   64758 cri.go:89] found id: ""
	I0804 00:18:03.041321   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.041332   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:03.041341   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:03.041427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:03.077822   64758 cri.go:89] found id: ""
	I0804 00:18:03.077851   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.077863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:03.077871   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:03.077928   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:03.116579   64758 cri.go:89] found id: ""
	I0804 00:18:03.116603   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.116611   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:03.116616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:03.116662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:03.154904   64758 cri.go:89] found id: ""
	I0804 00:18:03.154931   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.154942   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:03.154950   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:03.155018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:03.190300   64758 cri.go:89] found id: ""
	I0804 00:18:03.190328   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.190341   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:03.190349   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:03.190413   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:03.225975   64758 cri.go:89] found id: ""
	I0804 00:18:03.226004   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.226016   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:03.226023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:03.226087   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:03.271499   64758 cri.go:89] found id: ""
	I0804 00:18:03.271525   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.271535   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:03.271543   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:03.271602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:03.308643   64758 cri.go:89] found id: ""
	I0804 00:18:03.308668   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.308675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:03.308684   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:03.308698   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:03.324528   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:03.324562   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:03.401102   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:03.401125   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:03.401139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:03.481817   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:03.481864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:03.522568   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:03.522601   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.074678   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:06.089765   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:06.089844   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:06.128372   64758 cri.go:89] found id: ""
	I0804 00:18:06.128400   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.128411   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:06.128419   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:06.128467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:06.169488   64758 cri.go:89] found id: ""
	I0804 00:18:06.169515   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.169525   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:06.169532   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:06.169590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:06.207969   64758 cri.go:89] found id: ""
	I0804 00:18:06.207998   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.208009   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:06.208015   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:06.208067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:06.244497   64758 cri.go:89] found id: ""
	I0804 00:18:06.244521   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.244529   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:06.244535   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:06.244592   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:06.282905   64758 cri.go:89] found id: ""
	I0804 00:18:06.282935   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.282945   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:06.282952   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:06.283013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:06.322498   64758 cri.go:89] found id: ""
	I0804 00:18:06.322523   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.322530   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:06.322537   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:06.322583   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:06.361377   64758 cri.go:89] found id: ""
	I0804 00:18:06.361402   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.361412   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:06.361420   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:06.361485   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:06.402082   64758 cri.go:89] found id: ""
	I0804 00:18:06.402112   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.402120   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:06.402128   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:06.402141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.452052   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:06.452089   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:06.466695   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:06.466734   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:06.546115   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:06.546140   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:06.546155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:06.639671   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:06.639708   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:02.111360   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.612557   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:03.330392   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:05.828925   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.603713   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.100060   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.193473   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:09.207696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:09.207755   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:09.247757   64758 cri.go:89] found id: ""
	I0804 00:18:09.247784   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.247795   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:09.247802   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:09.247867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:09.285516   64758 cri.go:89] found id: ""
	I0804 00:18:09.285549   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.285559   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:09.285567   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:09.285628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:09.321689   64758 cri.go:89] found id: ""
	I0804 00:18:09.321715   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.321725   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:09.321732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:09.321789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:09.358135   64758 cri.go:89] found id: ""
	I0804 00:18:09.358158   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.358166   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:09.358176   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:09.358223   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:09.393642   64758 cri.go:89] found id: ""
	I0804 00:18:09.393667   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.393675   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:09.393681   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:09.393730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:09.430651   64758 cri.go:89] found id: ""
	I0804 00:18:09.430674   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.430683   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:09.430689   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:09.430734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:09.472433   64758 cri.go:89] found id: ""
	I0804 00:18:09.472460   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.472469   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:09.472474   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:09.472533   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:09.511147   64758 cri.go:89] found id: ""
	I0804 00:18:09.511171   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.511179   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:09.511187   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:09.511207   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:09.560099   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:09.560142   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:09.574609   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:09.574641   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:09.646863   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:09.646891   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:09.646906   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:09.727309   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:09.727352   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:09.111726   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.611445   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:08.329278   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:10.827361   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.600326   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:14.099811   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:12.268925   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:12.284737   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:12.284813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:12.326015   64758 cri.go:89] found id: ""
	I0804 00:18:12.326036   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.326044   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:12.326049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:12.326095   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:12.374096   64758 cri.go:89] found id: ""
	I0804 00:18:12.374129   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.374138   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:12.374143   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:12.374235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:12.426467   64758 cri.go:89] found id: ""
	I0804 00:18:12.426493   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.426502   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:12.426509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:12.426570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:12.485034   64758 cri.go:89] found id: ""
	I0804 00:18:12.485060   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.485072   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:12.485079   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:12.485138   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:12.523490   64758 cri.go:89] found id: ""
	I0804 00:18:12.523517   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.523525   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:12.523530   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:12.523577   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:12.563318   64758 cri.go:89] found id: ""
	I0804 00:18:12.563347   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.563358   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:12.563365   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:12.563425   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:12.600455   64758 cri.go:89] found id: ""
	I0804 00:18:12.600482   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.600492   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:12.600499   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:12.600566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:12.641146   64758 cri.go:89] found id: ""
	I0804 00:18:12.641170   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.641178   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:12.641186   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:12.641197   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:12.697240   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:12.697274   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:12.711399   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:12.711432   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:12.794022   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:12.794050   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:12.794067   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:12.881327   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:12.881379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:15.425765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:15.439338   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:15.439420   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:15.477964   64758 cri.go:89] found id: ""
	I0804 00:18:15.477991   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.478002   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:15.478009   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:15.478069   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:15.514554   64758 cri.go:89] found id: ""
	I0804 00:18:15.514574   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.514583   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:15.514588   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:15.514636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:15.549702   64758 cri.go:89] found id: ""
	I0804 00:18:15.549732   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.549741   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:15.549747   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:15.549813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:15.584619   64758 cri.go:89] found id: ""
	I0804 00:18:15.584663   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.584675   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:15.584683   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:15.584746   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:15.625084   64758 cri.go:89] found id: ""
	I0804 00:18:15.625111   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.625121   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:15.625128   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:15.625192   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:15.666629   64758 cri.go:89] found id: ""
	I0804 00:18:15.666655   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.666664   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:15.666673   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:15.666726   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:15.704287   64758 cri.go:89] found id: ""
	I0804 00:18:15.704316   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.704324   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:15.704330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:15.704383   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:15.740629   64758 cri.go:89] found id: ""
	I0804 00:18:15.740659   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.740668   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:15.740678   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:15.740702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:15.794093   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:15.794124   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:15.807629   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:15.807659   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:15.887638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:15.887665   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:15.887680   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:15.972935   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:15.972978   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:13.611758   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.613472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:13.327640   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.827432   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:16.100599   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.603192   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.518022   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:18.532360   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:18.532433   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:18.565519   64758 cri.go:89] found id: ""
	I0804 00:18:18.565544   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.565552   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:18.565557   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:18.565612   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:18.599939   64758 cri.go:89] found id: ""
	I0804 00:18:18.599967   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.599978   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:18.599985   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:18.600055   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:18.639035   64758 cri.go:89] found id: ""
	I0804 00:18:18.639062   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.639070   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:18.639076   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:18.639124   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:18.677188   64758 cri.go:89] found id: ""
	I0804 00:18:18.677210   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.677218   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:18.677223   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:18.677268   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:18.715892   64758 cri.go:89] found id: ""
	I0804 00:18:18.715921   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.715932   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:18.715940   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:18.716005   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:18.752274   64758 cri.go:89] found id: ""
	I0804 00:18:18.752298   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.752307   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:18.752313   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:18.752368   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:18.795251   64758 cri.go:89] found id: ""
	I0804 00:18:18.795279   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.795288   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:18.795293   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:18.795353   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.830842   64758 cri.go:89] found id: ""
	I0804 00:18:18.830866   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.830874   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:18.830882   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:18.830893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:18.883687   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:18.883719   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:18.898406   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:18.898433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:18.973191   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:18.973215   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:18.973231   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:19.054094   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:19.054141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:21.597245   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:21.612534   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:21.612605   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:21.649391   64758 cri.go:89] found id: ""
	I0804 00:18:21.649415   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.649426   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:21.649434   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:21.649492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:21.683202   64758 cri.go:89] found id: ""
	I0804 00:18:21.683226   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.683233   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:21.683244   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:21.683300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:21.717450   64758 cri.go:89] found id: ""
	I0804 00:18:21.717475   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.717484   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:21.717489   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:21.717547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:21.752559   64758 cri.go:89] found id: ""
	I0804 00:18:21.752588   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.752596   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:21.752602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:21.752650   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:21.788336   64758 cri.go:89] found id: ""
	I0804 00:18:21.788364   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.788375   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:21.788381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:21.788428   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:21.829404   64758 cri.go:89] found id: ""
	I0804 00:18:21.829428   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.829436   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:21.829443   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:21.829502   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:21.869473   64758 cri.go:89] found id: ""
	I0804 00:18:21.869504   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.869515   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:21.869521   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:21.869587   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.111377   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.610253   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:17.827889   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.327830   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.100486   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:23.599788   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.601620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.909883   64758 cri.go:89] found id: ""
	I0804 00:18:21.909907   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.909915   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:21.909923   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:21.909940   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:21.925038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:21.925071   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:22.000261   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.000281   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:22.000294   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:22.082813   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:22.082846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:22.126741   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:22.126774   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:24.677246   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:24.692404   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:24.692467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:24.739001   64758 cri.go:89] found id: ""
	I0804 00:18:24.739039   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.739049   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:24.739054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:24.739119   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:24.779558   64758 cri.go:89] found id: ""
	I0804 00:18:24.779586   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.779597   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:24.779605   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:24.779664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:24.819257   64758 cri.go:89] found id: ""
	I0804 00:18:24.819284   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.819295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:24.819301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:24.819363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:24.862504   64758 cri.go:89] found id: ""
	I0804 00:18:24.862531   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.862539   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:24.862544   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:24.862599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:24.899605   64758 cri.go:89] found id: ""
	I0804 00:18:24.899637   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.899649   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:24.899656   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:24.899716   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:24.936575   64758 cri.go:89] found id: ""
	I0804 00:18:24.936604   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.936612   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:24.936618   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:24.936667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:24.971736   64758 cri.go:89] found id: ""
	I0804 00:18:24.971764   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.971775   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:24.971782   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:24.971851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:25.010214   64758 cri.go:89] found id: ""
	I0804 00:18:25.010244   64758 logs.go:276] 0 containers: []
	W0804 00:18:25.010253   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:25.010265   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:25.010279   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:25.091145   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:25.091186   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:25.137574   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:25.137603   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:25.189559   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:25.189593   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:25.204725   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:25.204763   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:25.278903   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.111666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:22.827542   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:24.829587   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.326922   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:28.100576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:30.603955   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.779500   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:27.793548   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:27.793628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:27.830811   64758 cri.go:89] found id: ""
	I0804 00:18:27.830844   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.830854   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:27.830862   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:27.830919   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:27.869966   64758 cri.go:89] found id: ""
	I0804 00:18:27.869991   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.869998   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:27.870004   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:27.870062   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:27.909474   64758 cri.go:89] found id: ""
	I0804 00:18:27.909496   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.909504   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:27.909509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:27.909567   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:27.948588   64758 cri.go:89] found id: ""
	I0804 00:18:27.948613   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.948625   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:27.948632   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:27.948704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:27.991957   64758 cri.go:89] found id: ""
	I0804 00:18:27.991979   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.991987   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:27.991993   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:27.992052   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:28.029516   64758 cri.go:89] found id: ""
	I0804 00:18:28.029544   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.029555   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:28.029562   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:28.029627   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:28.067851   64758 cri.go:89] found id: ""
	I0804 00:18:28.067879   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.067891   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:28.067898   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:28.067957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:28.107488   64758 cri.go:89] found id: ""
	I0804 00:18:28.107514   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.107524   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:28.107534   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:28.107548   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:28.158490   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:28.158523   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:28.172000   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:28.172030   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:28.247803   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:28.247823   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:28.247839   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:28.326695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:28.326727   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:30.867241   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:30.881074   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:30.881146   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:30.919078   64758 cri.go:89] found id: ""
	I0804 00:18:30.919105   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.919115   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:30.919122   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:30.919184   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:30.954436   64758 cri.go:89] found id: ""
	I0804 00:18:30.954463   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.954474   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:30.954481   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:30.954546   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:30.993080   64758 cri.go:89] found id: ""
	I0804 00:18:30.993110   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.993121   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:30.993129   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:30.993188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:31.031465   64758 cri.go:89] found id: ""
	I0804 00:18:31.031493   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.031504   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:31.031512   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:31.031570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:31.067374   64758 cri.go:89] found id: ""
	I0804 00:18:31.067405   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.067416   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:31.067423   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:31.067493   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:31.104021   64758 cri.go:89] found id: ""
	I0804 00:18:31.104048   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.104059   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:31.104066   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:31.104128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:31.146995   64758 cri.go:89] found id: ""
	I0804 00:18:31.147023   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.147033   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:31.147040   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:31.147106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:31.184708   64758 cri.go:89] found id: ""
	I0804 00:18:31.184739   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.184749   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:31.184760   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:31.184776   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:31.237743   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:31.237781   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:31.252038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:31.252070   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:31.326357   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:31.326380   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:31.326401   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:31.408212   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:31.408256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:27.610666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.610899   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:31.611472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.827980   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:32.326666   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.099814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:35.100740   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.954396   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:33.968311   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:33.968384   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:34.006574   64758 cri.go:89] found id: ""
	I0804 00:18:34.006605   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.006625   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:34.006635   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:34.006698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:34.042400   64758 cri.go:89] found id: ""
	I0804 00:18:34.042427   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.042435   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:34.042441   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:34.042492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:34.080769   64758 cri.go:89] found id: ""
	I0804 00:18:34.080793   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.080804   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:34.080810   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:34.080877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:34.118283   64758 cri.go:89] found id: ""
	I0804 00:18:34.118311   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.118320   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:34.118326   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:34.118377   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:34.153679   64758 cri.go:89] found id: ""
	I0804 00:18:34.153708   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.153719   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:34.153727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:34.153780   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:34.189618   64758 cri.go:89] found id: ""
	I0804 00:18:34.189674   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.189686   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:34.189696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:34.189770   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:34.224628   64758 cri.go:89] found id: ""
	I0804 00:18:34.224666   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.224677   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:34.224684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:34.224744   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:34.265343   64758 cri.go:89] found id: ""
	I0804 00:18:34.265389   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.265399   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:34.265409   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:34.265428   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:34.337992   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:34.338014   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:34.338025   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:34.420224   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:34.420263   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:34.462009   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:34.462042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:34.520087   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:34.520120   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:34.111351   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.112271   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:34.328807   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.827190   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.599447   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.099291   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.035398   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:37.048955   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:37.049024   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:37.087433   64758 cri.go:89] found id: ""
	I0804 00:18:37.087460   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.087470   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:37.087478   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:37.087542   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:37.128227   64758 cri.go:89] found id: ""
	I0804 00:18:37.128255   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.128267   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:37.128275   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:37.128328   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:37.165371   64758 cri.go:89] found id: ""
	I0804 00:18:37.165405   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.165415   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:37.165424   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:37.165486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:37.201168   64758 cri.go:89] found id: ""
	I0804 00:18:37.201198   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.201209   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:37.201217   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:37.201278   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:37.237378   64758 cri.go:89] found id: ""
	I0804 00:18:37.237406   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.237414   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:37.237419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:37.237465   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:37.273425   64758 cri.go:89] found id: ""
	I0804 00:18:37.273456   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.273467   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:37.273475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:37.273547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:37.313019   64758 cri.go:89] found id: ""
	I0804 00:18:37.313048   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.313056   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:37.313061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:37.313116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:37.354741   64758 cri.go:89] found id: ""
	I0804 00:18:37.354771   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.354779   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:37.354788   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:37.354800   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:37.408703   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:37.408740   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.423393   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:37.423419   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:37.497460   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:37.497487   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:37.497501   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:37.579811   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:37.579856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:40.122872   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:40.139106   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:40.139177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:40.178571   64758 cri.go:89] found id: ""
	I0804 00:18:40.178599   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.178610   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:40.178617   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:40.178679   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:40.215680   64758 cri.go:89] found id: ""
	I0804 00:18:40.215714   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.215722   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:40.215728   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:40.215776   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:40.250618   64758 cri.go:89] found id: ""
	I0804 00:18:40.250647   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.250658   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:40.250666   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:40.250729   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:40.289195   64758 cri.go:89] found id: ""
	I0804 00:18:40.289223   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.289233   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:40.289240   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:40.289296   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:40.330961   64758 cri.go:89] found id: ""
	I0804 00:18:40.330988   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.330998   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:40.331006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:40.331056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:40.376435   64758 cri.go:89] found id: ""
	I0804 00:18:40.376465   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.376478   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:40.376487   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:40.376558   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:40.416415   64758 cri.go:89] found id: ""
	I0804 00:18:40.416447   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.416459   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:40.416465   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:40.416535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:40.452958   64758 cri.go:89] found id: ""
	I0804 00:18:40.452996   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.453007   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:40.453018   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:40.453036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:40.503775   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:40.503808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:40.517825   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:40.517855   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:40.587818   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:40.587847   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:40.587861   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:40.674139   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:40.674183   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:38.611068   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.611923   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:39.326489   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:41.327327   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:42.100795   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:44.602441   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.217266   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:43.232190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:43.232262   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:43.270127   64758 cri.go:89] found id: ""
	I0804 00:18:43.270156   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.270163   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:43.270169   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:43.270219   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:43.309401   64758 cri.go:89] found id: ""
	I0804 00:18:43.309429   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.309439   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:43.309446   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:43.309503   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:43.347210   64758 cri.go:89] found id: ""
	I0804 00:18:43.347235   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.347242   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:43.347247   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:43.347300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:43.382548   64758 cri.go:89] found id: ""
	I0804 00:18:43.382578   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.382588   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:43.382595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:43.382658   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:43.422076   64758 cri.go:89] found id: ""
	I0804 00:18:43.422102   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.422113   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:43.422121   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:43.422168   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:43.458525   64758 cri.go:89] found id: ""
	I0804 00:18:43.458552   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.458560   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:43.458566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:43.458623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:43.498134   64758 cri.go:89] found id: ""
	I0804 00:18:43.498157   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.498165   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:43.498170   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:43.498217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:43.543289   64758 cri.go:89] found id: ""
	I0804 00:18:43.543312   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.543320   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:43.543328   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:43.543338   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:43.593489   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:43.593521   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:43.607838   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:43.607869   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:43.682791   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:43.682813   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:43.682826   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:43.761695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:43.761737   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.305385   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:46.320003   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:46.320063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:46.367941   64758 cri.go:89] found id: ""
	I0804 00:18:46.367969   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.367980   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:46.367986   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:46.368058   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:46.422540   64758 cri.go:89] found id: ""
	I0804 00:18:46.422563   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.422572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:46.422578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:46.422637   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:46.470192   64758 cri.go:89] found id: ""
	I0804 00:18:46.470238   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.470248   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:46.470257   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:46.470316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:46.512375   64758 cri.go:89] found id: ""
	I0804 00:18:46.512399   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.512408   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:46.512413   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:46.512471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:46.546547   64758 cri.go:89] found id: ""
	I0804 00:18:46.546580   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.546592   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:46.546600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:46.546665   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:46.583598   64758 cri.go:89] found id: ""
	I0804 00:18:46.583621   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.583630   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:46.583636   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:46.583692   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:46.621066   64758 cri.go:89] found id: ""
	I0804 00:18:46.621101   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.621116   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:46.621122   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:46.621177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:46.654115   64758 cri.go:89] found id: ""
	I0804 00:18:46.654149   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.654162   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:46.654174   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:46.654191   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:46.738542   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:46.738582   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.778894   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:46.778923   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:46.833225   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:46.833257   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:46.847222   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:46.847247   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:18:42.612522   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.327420   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.327936   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:47.328380   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:46.604576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.100232   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:18:46.922590   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.423639   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:49.437417   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:49.437490   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:49.474889   64758 cri.go:89] found id: ""
	I0804 00:18:49.474914   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.474923   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:49.474929   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:49.474986   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:49.512860   64758 cri.go:89] found id: ""
	I0804 00:18:49.512889   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.512900   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:49.512908   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:49.512965   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:49.550558   64758 cri.go:89] found id: ""
	I0804 00:18:49.550594   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.550603   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:49.550611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:49.550671   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:49.587779   64758 cri.go:89] found id: ""
	I0804 00:18:49.587810   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.587823   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:49.587831   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:49.587890   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:49.630307   64758 cri.go:89] found id: ""
	I0804 00:18:49.630333   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.630344   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:49.630352   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:49.630411   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:49.665013   64758 cri.go:89] found id: ""
	I0804 00:18:49.665046   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.665057   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:49.665064   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:49.665127   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:49.701375   64758 cri.go:89] found id: ""
	I0804 00:18:49.701401   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.701410   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:49.701415   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:49.701472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:49.737237   64758 cri.go:89] found id: ""
	I0804 00:18:49.737261   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.737269   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:49.737278   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:49.737291   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:49.790998   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:49.791033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:49.804933   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:49.804965   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:49.877997   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.878019   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:49.878035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:49.963836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:49.963872   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:47.611774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.612581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.616560   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.827900   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.829950   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.599613   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:53.600496   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:52.506621   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:52.521482   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:52.521553   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:52.555980   64758 cri.go:89] found id: ""
	I0804 00:18:52.556010   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.556021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:52.556029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:52.556094   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:52.593088   64758 cri.go:89] found id: ""
	I0804 00:18:52.593119   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.593130   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:52.593138   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:52.593197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:52.632058   64758 cri.go:89] found id: ""
	I0804 00:18:52.632088   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.632107   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:52.632115   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:52.632177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:52.668701   64758 cri.go:89] found id: ""
	I0804 00:18:52.668730   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.668739   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:52.668745   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:52.668814   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:52.705041   64758 cri.go:89] found id: ""
	I0804 00:18:52.705068   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.705075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:52.705089   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:52.705149   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:52.743304   64758 cri.go:89] found id: ""
	I0804 00:18:52.743327   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.743335   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:52.743340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:52.743397   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:52.781020   64758 cri.go:89] found id: ""
	I0804 00:18:52.781050   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.781060   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:52.781073   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:52.781134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:52.820979   64758 cri.go:89] found id: ""
	I0804 00:18:52.821004   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.821014   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:52.821024   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:52.821042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:52.876450   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:52.876488   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:52.890529   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:52.890566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:52.960682   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:52.960710   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:52.960725   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:53.044000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:53.044040   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:55.601594   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:55.615574   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:55.615645   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:55.655116   64758 cri.go:89] found id: ""
	I0804 00:18:55.655146   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.655157   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:55.655164   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:55.655217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:55.695809   64758 cri.go:89] found id: ""
	I0804 00:18:55.695837   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.695846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:55.695851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:55.695909   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:55.732784   64758 cri.go:89] found id: ""
	I0804 00:18:55.732811   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.732822   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:55.732828   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:55.732920   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:55.773316   64758 cri.go:89] found id: ""
	I0804 00:18:55.773338   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.773347   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:55.773368   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:55.773416   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:55.808886   64758 cri.go:89] found id: ""
	I0804 00:18:55.808913   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.808924   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:55.808931   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:55.808990   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:55.848471   64758 cri.go:89] found id: ""
	I0804 00:18:55.848499   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.848507   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:55.848513   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:55.848568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:55.884088   64758 cri.go:89] found id: ""
	I0804 00:18:55.884117   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.884128   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:55.884134   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:55.884194   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:55.918194   64758 cri.go:89] found id: ""
	I0804 00:18:55.918222   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.918233   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:55.918243   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:55.918264   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:55.932685   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:55.932717   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:56.003817   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:56.003840   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:56.003856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:56.087804   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:56.087846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:56.129959   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:56.129993   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:54.111584   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.610664   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:54.327283   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.328332   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.100620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.601669   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.604763   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.685077   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:58.698624   64758 kubeadm.go:597] duration metric: took 4m4.179874556s to restartPrimaryControlPlane
	W0804 00:18:58.698704   64758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:18:58.698731   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:18:58.611004   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.611252   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.828188   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:01.329218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.100214   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.101275   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.967117   64758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.268366381s)
	I0804 00:19:03.967202   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:19:03.982098   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:19:03.991994   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:19:04.002780   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:19:04.002802   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:19:04.002845   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:19:04.012216   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:19:04.012279   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:19:04.021463   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:19:04.030689   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:19:04.030743   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:19:04.040801   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.050496   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:19:04.050558   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.060782   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:19:04.071595   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:19:04.071673   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:19:04.082123   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:19:04.313172   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:19:02.611712   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.111575   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.827427   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:06.327317   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.599775   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:09.599814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.611608   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.110194   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:08.333681   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.829626   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:11.601081   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.099098   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:12.110388   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.111401   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:13.327035   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:15.327695   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:17.327749   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.100543   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.602723   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:20.603470   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.611336   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.111798   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:19.329120   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.826869   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:22.605600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.101500   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:23.610581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.610814   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:24.326982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:26.827772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:27.599557   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.600283   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:28.110748   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:30.111027   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.327031   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:31.328581   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.101571   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.601251   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.610784   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.612611   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:33.828237   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:35.828319   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.099717   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.100492   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.111009   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.610805   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:38.326730   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:40.327548   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.330066   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:41.600239   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:43.600686   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.601464   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.110900   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:44.610221   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.605124   65087 pod_ready.go:81] duration metric: took 4m0.000843677s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:45.605152   65087 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0804 00:19:45.605175   65087 pod_ready.go:38] duration metric: took 4m13.615224515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:45.605208   65087 kubeadm.go:597] duration metric: took 4m21.736484018s to restartPrimaryControlPlane
	W0804 00:19:45.605273   65087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:19:45.605304   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:19:44.827547   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:47.329541   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:48.101237   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:50.603754   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:49.826561   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:51.828643   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.100714   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:55.102037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.832996   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:54.830906   65441 pod_ready.go:81] duration metric: took 4m0.010324747s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:54.830936   65441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:19:54.830947   65441 pod_ready.go:38] duration metric: took 4m4.842701336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:54.830968   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:19:54.831003   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:54.831070   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:54.887772   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:54.887804   65441 cri.go:89] found id: ""
	I0804 00:19:54.887815   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:54.887877   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.892740   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:54.892801   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:54.943044   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:54.943082   65441 cri.go:89] found id: ""
	I0804 00:19:54.943092   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:54.943164   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.947699   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:54.947765   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:54.997280   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:54.997302   65441 cri.go:89] found id: ""
	I0804 00:19:54.997311   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:54.997380   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.005574   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:55.005642   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:55.066824   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:55.066845   65441 cri.go:89] found id: ""
	I0804 00:19:55.066852   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:55.066906   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.071713   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:55.071779   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:55.116381   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.116406   65441 cri.go:89] found id: ""
	I0804 00:19:55.116414   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:55.116468   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.121174   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:55.121237   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:55.168300   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:55.168323   65441 cri.go:89] found id: ""
	I0804 00:19:55.168331   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:55.168381   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.173450   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:55.173509   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:55.218999   65441 cri.go:89] found id: ""
	I0804 00:19:55.219030   65441 logs.go:276] 0 containers: []
	W0804 00:19:55.219041   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:55.219048   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:55.219115   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:55.263696   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:55.263723   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.263727   65441 cri.go:89] found id: ""
	I0804 00:19:55.263734   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:55.263789   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.269001   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.277864   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:19:55.277899   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.323692   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:55.323729   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.364971   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:55.365005   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:55.871942   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:19:55.871983   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:19:55.929828   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:55.929869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:55.987389   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:55.987425   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:56.041330   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:56.041381   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:56.082524   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:56.082556   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:56.122545   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:19:56.122572   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:56.178249   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:19:56.178288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:56.219273   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:19:56.219300   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:19:56.235345   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:19:56.235389   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:19:56.370660   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:56.370692   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:57.600248   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:00.100920   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:58.936934   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:19:58.953624   65441 api_server.go:72] duration metric: took 4m14.22488371s to wait for apiserver process to appear ...
	I0804 00:19:58.953655   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:19:58.953700   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:58.953764   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:58.997408   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:58.997434   65441 cri.go:89] found id: ""
	I0804 00:19:58.997443   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:58.997492   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.004398   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:59.004466   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:59.041483   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.041510   65441 cri.go:89] found id: ""
	I0804 00:19:59.041518   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:59.041568   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.045754   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:59.045825   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:59.081738   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.081756   65441 cri.go:89] found id: ""
	I0804 00:19:59.081764   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:59.081809   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.086297   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:59.086348   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:59.124421   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:59.124440   65441 cri.go:89] found id: ""
	I0804 00:19:59.124447   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:59.124494   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.128612   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:59.128677   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:59.165702   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:59.165728   65441 cri.go:89] found id: ""
	I0804 00:19:59.165737   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:59.165791   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.170016   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:59.170103   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:59.205275   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:59.205299   65441 cri.go:89] found id: ""
	I0804 00:19:59.205307   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:59.205377   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.209637   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:59.209699   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:59.244254   65441 cri.go:89] found id: ""
	I0804 00:19:59.244281   65441 logs.go:276] 0 containers: []
	W0804 00:19:59.244290   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:59.244295   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:59.244343   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:59.281850   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:59.281876   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.281880   65441 cri.go:89] found id: ""
	I0804 00:19:59.281887   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:59.281935   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.286423   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.291108   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:59.291134   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.340778   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:59.340808   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.379258   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:59.379288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.418902   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:59.418932   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:59.875668   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:59.875708   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:59.932947   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:59.932980   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:59.980190   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:59.980224   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:00.024331   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:00.024359   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:00.064676   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:00.064701   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:00.117684   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:00.117717   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:00.153654   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:00.153683   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:00.200840   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:00.200869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:00.214380   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:00.214410   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:02.101240   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:04.600064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:02.832546   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:20:02.837684   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:20:02.838736   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:02.838763   65441 api_server.go:131] duration metric: took 3.885096913s to wait for apiserver health ...
	I0804 00:20:02.838773   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:02.838798   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:02.838856   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:02.878530   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:02.878556   65441 cri.go:89] found id: ""
	I0804 00:20:02.878565   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:20:02.878628   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.883263   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:02.883338   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:02.921989   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:02.922009   65441 cri.go:89] found id: ""
	I0804 00:20:02.922017   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:20:02.922062   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.928690   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:02.928767   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:02.967469   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:02.967490   65441 cri.go:89] found id: ""
	I0804 00:20:02.967498   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:20:02.967544   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.972155   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:02.972217   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:03.011875   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:03.011900   65441 cri.go:89] found id: ""
	I0804 00:20:03.011910   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:20:03.011966   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.016326   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:03.016395   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:03.057114   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:03.057137   65441 cri.go:89] found id: ""
	I0804 00:20:03.057145   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:20:03.057206   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.061528   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:03.061592   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:03.101778   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:03.101832   65441 cri.go:89] found id: ""
	I0804 00:20:03.101842   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:20:03.101902   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.106292   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:03.106368   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:03.146453   65441 cri.go:89] found id: ""
	I0804 00:20:03.146484   65441 logs.go:276] 0 containers: []
	W0804 00:20:03.146496   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:03.146504   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:03.146569   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:03.185861   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.185884   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.185887   65441 cri.go:89] found id: ""
	I0804 00:20:03.185894   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:20:03.185941   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.190490   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.194727   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:03.194750   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:03.308015   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:20:03.308052   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:03.358699   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:20:03.358732   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:03.410398   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:20:03.410430   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.450651   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:03.450685   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:03.859092   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:03.859145   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.905500   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:03.905529   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:03.951014   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:03.951047   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:04.003275   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:04.003311   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:04.017574   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:20:04.017608   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:04.054252   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:20:04.054283   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:04.094524   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:04.094558   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:04.131163   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:04.131192   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:06.691154   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:06.691193   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.691199   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.691203   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.691209   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.691213   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.691218   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.691226   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.691232   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.691244   65441 system_pods.go:74] duration metric: took 3.852463199s to wait for pod list to return data ...
	I0804 00:20:06.691257   65441 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:06.693724   65441 default_sa.go:45] found service account: "default"
	I0804 00:20:06.693755   65441 default_sa.go:55] duration metric: took 2.486182ms for default service account to be created ...
	I0804 00:20:06.693767   65441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:06.698925   65441 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:06.698950   65441 system_pods.go:89] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.698956   65441 system_pods.go:89] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.698962   65441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.698968   65441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.698972   65441 system_pods.go:89] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.698976   65441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.698983   65441 system_pods.go:89] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.698990   65441 system_pods.go:89] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.698997   65441 system_pods.go:126] duration metric: took 5.224971ms to wait for k8s-apps to be running ...
	I0804 00:20:06.699003   65441 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:06.699047   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:06.714188   65441 system_svc.go:56] duration metric: took 15.17801ms WaitForService to wait for kubelet
	I0804 00:20:06.714213   65441 kubeadm.go:582] duration metric: took 4m21.985480612s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:06.714232   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:06.716717   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:06.716743   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:06.716757   65441 node_conditions.go:105] duration metric: took 2.521245ms to run NodePressure ...
	I0804 00:20:06.716771   65441 start.go:241] waiting for startup goroutines ...
	I0804 00:20:06.716780   65441 start.go:246] waiting for cluster config update ...
	I0804 00:20:06.716796   65441 start.go:255] writing updated cluster config ...
	I0804 00:20:06.717156   65441 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:06.765983   65441 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:06.768482   65441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-969068" cluster and "default" namespace by default
	I0804 00:20:06.600233   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:08.603829   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:11.852948   65087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.247618249s)
	I0804 00:20:11.853025   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:11.870882   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:20:11.882005   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:20:11.892505   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:20:11.892527   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:20:11.892570   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:20:11.902005   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:20:11.902061   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:20:11.911585   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:20:11.921837   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:20:11.921911   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:20:11.101091   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:13.607073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:14.600605   64502 pod_ready.go:81] duration metric: took 4m0.007136508s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:20:14.600629   64502 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:20:14.600637   64502 pod_ready.go:38] duration metric: took 4m5.120472791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:14.600651   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:14.600675   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:14.600717   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:14.669699   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:14.669724   64502 cri.go:89] found id: ""
	I0804 00:20:14.669733   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:14.669789   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.674907   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:14.674978   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:14.720830   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:14.720867   64502 cri.go:89] found id: ""
	I0804 00:20:14.720877   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:14.720937   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.726667   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:14.726729   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:14.778216   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:14.778247   64502 cri.go:89] found id: ""
	I0804 00:20:14.778256   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:14.778321   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.785349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:14.785433   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:14.836381   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:14.836408   64502 cri.go:89] found id: ""
	I0804 00:20:14.836416   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:14.836475   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.841662   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:14.841752   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:14.884803   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:14.884827   64502 cri.go:89] found id: ""
	I0804 00:20:14.884836   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:14.884904   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.890625   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:14.890696   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:14.942713   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:14.942732   64502 cri.go:89] found id: ""
	I0804 00:20:14.942739   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:14.942800   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.948335   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:14.948391   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:14.994869   64502 cri.go:89] found id: ""
	I0804 00:20:14.994900   64502 logs.go:276] 0 containers: []
	W0804 00:20:14.994910   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:14.994917   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:14.994985   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:15.034528   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.034557   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.034564   64502 cri.go:89] found id: ""
	I0804 00:20:15.034574   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:15.034633   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.039335   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.044600   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:15.044625   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.091365   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:15.091398   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.144896   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:15.144924   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:15.675849   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:15.675901   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:15.691640   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:15.691699   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:11.931844   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.941369   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:20:11.941430   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.951279   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:20:11.961201   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:20:11.961275   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:20:11.972150   65087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:20:12.024567   65087 kubeadm.go:310] W0804 00:20:12.001791    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.025287   65087 kubeadm.go:310] W0804 00:20:12.002530    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.154034   65087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:20:20.358593   65087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0804 00:20:20.358649   65087 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:20:20.358721   65087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:20:20.358834   65087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:20:20.358953   65087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 00:20:20.359013   65087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:20:20.360498   65087 out.go:204]   - Generating certificates and keys ...
	I0804 00:20:20.360590   65087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:20:20.360692   65087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:20:20.360767   65087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:20:20.360821   65087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:20:20.360915   65087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:20:20.360971   65087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:20:20.361042   65087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:20:20.361124   65087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:20:20.361228   65087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:20:20.361307   65087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:20:20.361342   65087 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:20:20.361436   65087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:20:20.361523   65087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:20:20.361592   65087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:20:20.361642   65087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:20:20.361698   65087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:20:20.361746   65087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:20:20.361815   65087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:20:20.361881   65087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:20:20.363214   65087 out.go:204]   - Booting up control plane ...
	I0804 00:20:20.363312   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:20:20.363381   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:20:20.363450   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:20:20.363541   65087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:20:20.363628   65087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:20:20.363678   65087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:20:20.363790   65087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:20:20.363889   65087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 00:20:20.363960   65087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.009132208s
	I0804 00:20:20.364044   65087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:20:20.364094   65087 kubeadm.go:310] [api-check] The API server is healthy after 4.501833932s
	I0804 00:20:20.364201   65087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:20:20.364321   65087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:20:20.364397   65087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:20:20.364585   65087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-118016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:20:20.364634   65087 kubeadm.go:310] [bootstrap-token] Using token: bbnfwa.jorg7huedw5cbtk2
	I0804 00:20:20.366569   65087 out.go:204]   - Configuring RBAC rules ...
	I0804 00:20:20.366705   65087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:20:20.366823   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:20:20.366979   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:20:20.367160   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:20:20.367275   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:20:20.367352   65087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:20:20.367447   65087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:20:20.367510   65087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:20:20.367574   65087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:20:20.367580   65087 kubeadm.go:310] 
	I0804 00:20:20.367629   65087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:20:20.367635   65087 kubeadm.go:310] 
	I0804 00:20:20.367697   65087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:20:20.367703   65087 kubeadm.go:310] 
	I0804 00:20:20.367724   65087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:20:20.367784   65087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:20:20.367828   65087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:20:20.367834   65087 kubeadm.go:310] 
	I0804 00:20:20.367886   65087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:20:20.367903   65087 kubeadm.go:310] 
	I0804 00:20:20.367971   65087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:20:20.367981   65087 kubeadm.go:310] 
	I0804 00:20:20.368050   65087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:20:20.368125   65087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:20:20.368187   65087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:20:20.368193   65087 kubeadm.go:310] 
	I0804 00:20:20.368262   65087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:20:20.368353   65087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:20:20.368367   65087 kubeadm.go:310] 
	I0804 00:20:20.368480   65087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368588   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:20:20.368614   65087 kubeadm.go:310] 	--control-plane 
	I0804 00:20:20.368621   65087 kubeadm.go:310] 
	I0804 00:20:20.368705   65087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:20:20.368712   65087 kubeadm.go:310] 
	I0804 00:20:20.368810   65087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368933   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:20:20.368947   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:20:20.368957   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:20:20.370303   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:20:15.859131   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:15.859169   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:15.917686   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:15.917726   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:15.964285   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:15.964328   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:16.019646   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:16.019679   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:16.069379   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:16.069416   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:16.129752   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:16.129842   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:16.177015   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:16.177043   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:16.220526   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:16.220560   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:18.771509   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:18.793252   64502 api_server.go:72] duration metric: took 4m15.042389156s to wait for apiserver process to appear ...
	I0804 00:20:18.793291   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:18.793334   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:18.793415   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:18.839339   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:18.839363   64502 cri.go:89] found id: ""
	I0804 00:20:18.839372   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:18.839432   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.843932   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:18.844005   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:18.894398   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:18.894422   64502 cri.go:89] found id: ""
	I0804 00:20:18.894432   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:18.894491   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.899596   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:18.899664   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:18.947077   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:18.947106   64502 cri.go:89] found id: ""
	I0804 00:20:18.947114   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:18.947168   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.952349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:18.952431   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:18.999336   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:18.999361   64502 cri.go:89] found id: ""
	I0804 00:20:18.999377   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:18.999441   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.005419   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:19.005502   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:19.061388   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.061413   64502 cri.go:89] found id: ""
	I0804 00:20:19.061422   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:19.061476   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.066071   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:19.066139   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:19.111849   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.111872   64502 cri.go:89] found id: ""
	I0804 00:20:19.111879   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:19.111929   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.116272   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:19.116323   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:19.157387   64502 cri.go:89] found id: ""
	I0804 00:20:19.157414   64502 logs.go:276] 0 containers: []
	W0804 00:20:19.157423   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:19.157431   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:19.157493   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:19.199627   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.199654   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.199660   64502 cri.go:89] found id: ""
	I0804 00:20:19.199669   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:19.199727   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.204317   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.208707   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:19.208729   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:19.261684   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:19.261717   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:19.309861   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:19.309890   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:19.349376   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:19.349403   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.388561   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:19.388590   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.466119   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:19.466163   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.515539   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:19.515575   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.561529   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:19.561556   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:19.626188   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:19.626219   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:19.640348   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:19.640372   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:19.772397   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:19.772439   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:19.827217   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:19.827260   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:20.306543   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:20.306589   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:20.371388   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:20:20.384738   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:20:20.404547   65087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:20:20.404607   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:20.404659   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-118016 minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=no-preload-118016 minikube.k8s.io/primary=true
	I0804 00:20:20.602476   65087 ops.go:34] apiserver oom_adj: -16
	I0804 00:20:20.602551   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.103011   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.602888   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.102779   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.603282   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.103337   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.603522   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.103510   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.603474   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.689895   65087 kubeadm.go:1113] duration metric: took 4.285337247s to wait for elevateKubeSystemPrivileges
	I0804 00:20:24.689931   65087 kubeadm.go:394] duration metric: took 5m0.881315877s to StartCluster
	I0804 00:20:24.689947   65087 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.690018   65087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:20:24.691559   65087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.691784   65087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:20:24.691848   65087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:20:24.691963   65087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-118016"
	I0804 00:20:24.691977   65087 addons.go:69] Setting default-storageclass=true in profile "no-preload-118016"
	I0804 00:20:24.691999   65087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-118016"
	I0804 00:20:24.692001   65087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118016"
	I0804 00:20:24.692001   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:20:24.692018   65087 addons.go:69] Setting metrics-server=true in profile "no-preload-118016"
	W0804 00:20:24.692007   65087 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:20:24.692068   65087 addons.go:234] Setting addon metrics-server=true in "no-preload-118016"
	I0804 00:20:24.692086   65087 host.go:66] Checking if "no-preload-118016" exists ...
	W0804 00:20:24.692099   65087 addons.go:243] addon metrics-server should already be in state true
	I0804 00:20:24.692142   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.692440   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692464   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692494   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692441   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692517   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692566   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.693590   65087 out.go:177] * Verifying Kubernetes components...
	I0804 00:20:24.695139   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:20:24.708841   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0804 00:20:24.709324   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.709911   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.709937   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.710305   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.710594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.712827   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0804 00:20:24.712894   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0804 00:20:24.713349   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713884   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713899   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.713923   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713942   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.714211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714264   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714421   65087 addons.go:234] Setting addon default-storageclass=true in "no-preload-118016"
	W0804 00:20:24.714440   65087 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:20:24.714468   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.714605   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714623   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714801   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714846   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714993   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.715014   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.730476   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0804 00:20:24.730811   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0804 00:20:24.730912   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731145   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731470   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731486   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731733   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731748   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731808   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732034   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.732123   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732294   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.733677   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0804 00:20:24.734185   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.734257   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734306   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734689   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.734709   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.735090   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.735618   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.735643   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.736977   65087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:20:24.736979   65087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:20:22.853589   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:20:22.859439   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:20:22.860503   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:22.860521   64502 api_server.go:131] duration metric: took 4.067223392s to wait for apiserver health ...
	I0804 00:20:22.860528   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:22.860550   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:22.860598   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:22.901174   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:22.901193   64502 cri.go:89] found id: ""
	I0804 00:20:22.901200   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:22.901246   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.905319   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:22.905404   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:22.948354   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:22.948378   64502 cri.go:89] found id: ""
	I0804 00:20:22.948387   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:22.948438   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.952776   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:22.952863   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:22.989339   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:22.989376   64502 cri.go:89] found id: ""
	I0804 00:20:22.989385   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:22.989443   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.993833   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:22.993909   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:23.035367   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.035385   64502 cri.go:89] found id: ""
	I0804 00:20:23.035392   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:23.035434   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.040184   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:23.040259   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:23.078508   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.078529   64502 cri.go:89] found id: ""
	I0804 00:20:23.078538   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:23.078601   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.082907   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:23.082969   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:23.120846   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.120870   64502 cri.go:89] found id: ""
	I0804 00:20:23.120880   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:23.120943   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.125641   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:23.125702   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:23.172188   64502 cri.go:89] found id: ""
	I0804 00:20:23.172212   64502 logs.go:276] 0 containers: []
	W0804 00:20:23.172224   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:23.172232   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:23.172297   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:23.218188   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.218207   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.218211   64502 cri.go:89] found id: ""
	I0804 00:20:23.218217   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:23.218268   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.222562   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.226965   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:23.226989   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:23.269384   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:23.269414   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.309148   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:23.309178   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.356908   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:23.356936   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.395352   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:23.395381   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:23.450901   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:23.450925   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.488908   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:23.488945   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.551780   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:23.551808   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:23.975030   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:23.975070   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:24.035464   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:24.035497   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:24.053118   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:24.053148   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:24.197157   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:24.197189   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:24.254049   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:24.254083   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:24.738735   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:20:24.738757   65087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:20:24.738785   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.738836   65087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:24.738847   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:20:24.738860   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.742131   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742539   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.742569   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742690   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.742968   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743009   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.743254   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.743142   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743174   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.743387   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.743447   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743590   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743720   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.754983   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0804 00:20:24.755436   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.755877   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.755901   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.756229   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.756454   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.758285   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.758520   65087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:24.758537   65087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:20:24.758555   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.761268   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.761715   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.761739   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.762001   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.762211   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.762402   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.762593   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.942323   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:20:24.971293   65087 node_ready.go:35] waiting up to 6m0s for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991406   65087 node_ready.go:49] node "no-preload-118016" has status "Ready":"True"
	I0804 00:20:24.991428   65087 node_ready.go:38] duration metric: took 20.101499ms for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991436   65087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:25.004484   65087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:25.069407   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:20:25.069437   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:20:25.093645   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:25.178590   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:20:25.178615   65087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:20:25.246634   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:25.272880   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.272916   65087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:20:25.368517   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.442382   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442406   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.442668   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.442711   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.442717   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.442726   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442732   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.444425   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.444456   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.444463   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.451275   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.451298   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.451605   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.451620   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.451617   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218075   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218105   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218391   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218416   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.218427   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218435   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218440   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218702   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218764   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218786   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.671629   65087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.303057537s)
	I0804 00:20:26.671683   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.671702   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672010   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672031   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672041   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.672049   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672327   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672363   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672378   65087 addons.go:475] Verifying addon metrics-server=true in "no-preload-118016"
	I0804 00:20:26.674374   65087 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
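The block above shows the addon enable path: manifests are scp'd into /etc/kubernetes/addons on the node and then applied with the bundled kubectl against /var/lib/minikube/kubeconfig. Below is a minimal Go sketch of that apply step only; the manifest paths and kubeconfig location are taken from the log, while running a local `kubectl` binary via os/exec (instead of minikube's ssh_runner over SSH) is an illustrative assumption, not the tool's actual mechanism.

// addons_apply_sketch.go — apply the metrics-server addon manifests with kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Manifest paths as they appear in the log lines above.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	// The log runs kubectl with an explicit kubeconfig; mirror that via the environment.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}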
	I0804 00:20:26.803868   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:26.803909   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.803917   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.803923   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.803928   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.803934   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.803940   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.803948   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.803957   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.803966   64502 system_pods.go:74] duration metric: took 3.943432992s to wait for pod list to return data ...
	I0804 00:20:26.803978   64502 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:26.808760   64502 default_sa.go:45] found service account: "default"
	I0804 00:20:26.808786   64502 default_sa.go:55] duration metric: took 4.797226ms for default service account to be created ...
	I0804 00:20:26.808796   64502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:26.814721   64502 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:26.814753   64502 system_pods.go:89] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.814761   64502 system_pods.go:89] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.814768   64502 system_pods.go:89] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.814774   64502 system_pods.go:89] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.814780   64502 system_pods.go:89] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.814787   64502 system_pods.go:89] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.814798   64502 system_pods.go:89] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.814807   64502 system_pods.go:89] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.814819   64502 system_pods.go:126] duration metric: took 6.01558ms to wait for k8s-apps to be running ...
	I0804 00:20:26.814828   64502 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:26.814894   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:26.837462   64502 system_svc.go:56] duration metric: took 22.624089ms WaitForService to wait for kubelet
	I0804 00:20:26.837494   64502 kubeadm.go:582] duration metric: took 4m23.086636256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:26.837522   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:26.841517   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:26.841548   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:26.841563   64502 node_conditions.go:105] duration metric: took 4.034693ms to run NodePressure ...
	I0804 00:20:26.841576   64502 start.go:241] waiting for startup goroutines ...
	I0804 00:20:26.841586   64502 start.go:246] waiting for cluster config update ...
	I0804 00:20:26.841600   64502 start.go:255] writing updated cluster config ...
	I0804 00:20:26.841939   64502 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:26.908142   64502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:26.910191   64502 out.go:177] * Done! kubectl is now configured to use "embed-certs-877598" cluster and "default" namespace by default
	I0804 00:20:26.675679   65087 addons.go:510] duration metric: took 1.98382947s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:20:27.012226   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:29.511886   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:32.011000   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:32.011021   65087 pod_ready.go:81] duration metric: took 7.006508451s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:32.011031   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518235   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.518260   65087 pod_ready.go:81] duration metric: took 1.507219524s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518270   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522894   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.522916   65087 pod_ready.go:81] duration metric: took 4.639763ms for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522928   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527271   65087 pod_ready.go:92] pod "kube-proxy-4jqng" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.527291   65087 pod_ready.go:81] duration metric: took 4.353851ms for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527303   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531405   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.531424   65087 pod_ready.go:81] duration metric: took 4.113418ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531433   65087 pod_ready.go:38] duration metric: took 8.539987559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:33.531449   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:33.531505   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:33.546783   65087 api_server.go:72] duration metric: took 8.854972636s to wait for apiserver process to appear ...
	I0804 00:20:33.546813   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:33.546832   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:20:33.551131   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:20:33.552092   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:20:33.552112   65087 api_server.go:131] duration metric: took 5.292367ms to wait for apiserver health ...
	I0804 00:20:33.552119   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:33.557965   65087 system_pods.go:59] 9 kube-system pods found
	I0804 00:20:33.557987   65087 system_pods.go:61] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.557995   65087 system_pods.go:61] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.558000   65087 system_pods.go:61] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.558005   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.558009   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.558014   65087 system_pods.go:61] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.558018   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.558026   65087 system_pods.go:61] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.558035   65087 system_pods.go:61] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.558045   65087 system_pods.go:74] duration metric: took 5.921154ms to wait for pod list to return data ...
	I0804 00:20:33.558057   65087 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:33.608139   65087 default_sa.go:45] found service account: "default"
	I0804 00:20:33.608164   65087 default_sa.go:55] duration metric: took 50.097877ms for default service account to be created ...
	I0804 00:20:33.608174   65087 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:33.811878   65087 system_pods.go:86] 9 kube-system pods found
	I0804 00:20:33.811906   65087 system_pods.go:89] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.811912   65087 system_pods.go:89] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.811916   65087 system_pods.go:89] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.811920   65087 system_pods.go:89] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.811925   65087 system_pods.go:89] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.811928   65087 system_pods.go:89] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.811932   65087 system_pods.go:89] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.811939   65087 system_pods.go:89] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.811943   65087 system_pods.go:89] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.811950   65087 system_pods.go:126] duration metric: took 203.770479ms to wait for k8s-apps to be running ...
	I0804 00:20:33.811957   65087 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:33.812000   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:33.827146   65087 system_svc.go:56] duration metric: took 15.17867ms WaitForService to wait for kubelet
	I0804 00:20:33.827176   65087 kubeadm.go:582] duration metric: took 9.135367695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:33.827199   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:34.009032   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:34.009056   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:34.009076   65087 node_conditions.go:105] duration metric: took 181.872031ms to run NodePressure ...
	I0804 00:20:34.009086   65087 start.go:241] waiting for startup goroutines ...
	I0804 00:20:34.009112   65087 start.go:246] waiting for cluster config update ...
	I0804 00:20:34.009128   65087 start.go:255] writing updated cluster config ...
	I0804 00:20:34.009423   65087 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:34.059796   65087 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 00:20:34.061903   65087 out.go:177] * Done! kubectl is now configured to use "no-preload-118016" cluster and "default" namespace by default
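Both successful runs above finish with the same health gate recorded by api_server.go: poll https://<node-ip>:8443/healthz until it returns 200 with body "ok". The following is a minimal Go sketch of such a probe using the endpoint from the log; skipping CA verification is an assumption made only to keep the sketch self-contained, whereas the real client trusts the cluster CA.

// healthz_probe_sketch.go — GET the apiserver /healthz endpoint and expect 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: do not verify the serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.61.137:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, string(body))
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		os.Exit(1)
	}
}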
	I0804 00:21:00.664979   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:21:00.665100   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:21:00.666810   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:00.666904   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:00.667020   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:00.667150   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:00.667278   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:00.667370   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:00.670254   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:00.670337   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:00.670431   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:00.670537   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:00.670623   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:00.670721   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:00.670788   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:00.670883   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:00.670990   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:00.671079   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:00.671168   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:00.671217   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:00.671265   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:00.671359   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:00.671442   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:00.671529   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:00.671611   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:00.671756   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:00.671856   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:00.671888   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:00.671940   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:00.673410   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:00.673506   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:00.673573   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:00.673627   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:00.673692   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:00.673828   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:00.673876   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:00.673972   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674207   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674283   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674517   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674590   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674752   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674851   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675053   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675173   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675451   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675463   64758 kubeadm.go:310] 
	I0804 00:21:00.675511   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:21:00.675561   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:21:00.675571   64758 kubeadm.go:310] 
	I0804 00:21:00.675614   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:21:00.675656   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:21:00.675787   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:21:00.675797   64758 kubeadm.go:310] 
	I0804 00:21:00.675928   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:21:00.675970   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:21:00.676009   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:21:00.676026   64758 kubeadm.go:310] 
	I0804 00:21:00.676172   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:21:00.676278   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:21:00.676289   64758 kubeadm.go:310] 
	I0804 00:21:00.676393   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:21:00.676466   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:21:00.676532   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:21:00.676609   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:21:00.676632   64758 kubeadm.go:310] 
	W0804 00:21:00.676723   64758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
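When wait-control-plane times out as it does here, the kubeadm output itself names the triage steps: check the kubelet service, its journal, and the CRI-O containers. Below is a minimal Go sketch that runs those same commands on the node and prints their output; it assumes systemctl, journalctl, and crictl are available (as they are on the minikube VM) and is only a convenience wrapper around the commands quoted in the error text.

// kubelet_triage_sketch.go — run the troubleshooting commands suggested by kubeadm.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"systemctl", "status", "kubelet", "--no-pager"},
		{"journalctl", "-u", "kubelet", "-n", "100", "--no-pager"},
		{"crictl", "--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a"},
	}
	for _, c := range cmds {
		fmt.Printf("==> %v\n", c)
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Keep going: a failing command is itself useful triage information.
			fmt.Println("command failed:", err)
		}
	}
}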
	
	I0804 00:21:00.676765   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:21:01.138781   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:21:01.154405   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:21:01.164426   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:21:01.164445   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:21:01.164496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:21:01.173853   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:21:01.173907   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:21:01.183634   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:21:01.193283   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:21:01.193342   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:21:01.202427   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.212186   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:21:01.212235   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.222754   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:21:01.232996   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:21:01.233059   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:21:01.243778   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:21:01.319895   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:01.319975   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:01.474907   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:01.475029   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:01.475119   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:01.683624   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:01.685482   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:01.685584   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:01.685691   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:01.685792   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:01.685880   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:01.685991   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:01.686067   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:01.686147   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:01.686285   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:01.686399   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:01.686513   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:01.686600   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:01.686670   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:01.985613   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:02.088377   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:02.336621   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:02.448654   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:02.470140   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:02.471390   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:02.471456   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:02.610840   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:02.612641   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:02.612744   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:02.627044   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:02.629019   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:02.630430   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:02.633022   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:42.635581   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:42.635740   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:42.636036   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:47.636656   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:47.636879   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:57.637900   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:57.638098   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:17.638425   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:17.638634   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637807   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:57.637988   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637996   64758 kubeadm.go:310] 
	I0804 00:22:57.638035   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:22:57.638079   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:22:57.638085   64758 kubeadm.go:310] 
	I0804 00:22:57.638118   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:22:57.638148   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:22:57.638288   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:22:57.638309   64758 kubeadm.go:310] 
	I0804 00:22:57.638426   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:22:57.638507   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:22:57.638619   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:22:57.638640   64758 kubeadm.go:310] 
	I0804 00:22:57.638829   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:22:57.638944   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:22:57.638959   64758 kubeadm.go:310] 
	I0804 00:22:57.639107   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:22:57.639191   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:22:57.639300   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:22:57.639399   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:22:57.639412   64758 kubeadm.go:310] 
	I0804 00:22:57.639782   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:22:57.639904   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:22:57.640012   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:22:57.640091   64758 kubeadm.go:394] duration metric: took 8m3.172057529s to StartCluster
	I0804 00:22:57.640138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:22:57.640202   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:22:57.684020   64758 cri.go:89] found id: ""
	I0804 00:22:57.684054   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.684064   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:22:57.684072   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:22:57.684134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:22:57.722756   64758 cri.go:89] found id: ""
	I0804 00:22:57.722780   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.722788   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:22:57.722793   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:22:57.722851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:22:57.760371   64758 cri.go:89] found id: ""
	I0804 00:22:57.760400   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.760412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:22:57.760419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:22:57.760476   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:22:57.796114   64758 cri.go:89] found id: ""
	I0804 00:22:57.796144   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.796155   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:22:57.796162   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:22:57.796211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:22:57.842148   64758 cri.go:89] found id: ""
	I0804 00:22:57.842179   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.842191   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:22:57.842198   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:22:57.842286   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:22:57.914193   64758 cri.go:89] found id: ""
	I0804 00:22:57.914218   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.914229   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:22:57.914236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:22:57.914290   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:22:57.965944   64758 cri.go:89] found id: ""
	I0804 00:22:57.965973   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.965984   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:22:57.965991   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:22:57.966063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:22:58.003016   64758 cri.go:89] found id: ""
	I0804 00:22:58.003044   64758 logs.go:276] 0 containers: []
	W0804 00:22:58.003055   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:22:58.003066   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:22:58.003093   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:22:58.017277   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:22:58.017304   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:22:58.094192   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:22:58.094214   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:22:58.094227   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:22:58.210901   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:22:58.210944   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:22:58.249283   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:22:58.249317   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:22:58.300998   64758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:22:58.301054   64758 out.go:239] * 
	W0804 00:22:58.301115   64758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.301137   64758 out.go:239] * 
	W0804 00:22:58.301978   64758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:22:58.305305   64758 out.go:177] 
	W0804 00:22:58.306722   64758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.306816   64758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:22:58.306848   64758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:22:58.308372   64758 out.go:177] 
	
	
	==> CRI-O <==
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.301146328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722730980301122688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87302c67-2b02-45d3-9015-d96e05085b40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.301690916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bf2607f-9210-4a07-b2c4-52fa5002c300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.301765109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bf2607f-9210-4a07-b2c4-52fa5002c300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.301799812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8bf2607f-9210-4a07-b2c4-52fa5002c300 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.337014260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f1057b2-2bea-4be0-bea3-98714d808cbd name=/runtime.v1.RuntimeService/Version
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.337089391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f1057b2-2bea-4be0-bea3-98714d808cbd name=/runtime.v1.RuntimeService/Version
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.338505678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45015af7-5d89-468a-b721-da4853e9c7ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.338948292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722730980338921195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45015af7-5d89-468a-b721-da4853e9c7ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.339609419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b94f791f-da65-43cf-9a1a-2997fc58f79e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.339663181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b94f791f-da65-43cf-9a1a-2997fc58f79e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.339695621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b94f791f-da65-43cf-9a1a-2997fc58f79e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.374871566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22f32743-2ec4-488c-baef-f14a54c1c114 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.374965874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22f32743-2ec4-488c-baef-f14a54c1c114 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.376618870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eca89a30-4db4-43e6-b4bb-006cc5a1c0fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.377038186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722730980377015773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eca89a30-4db4-43e6-b4bb-006cc5a1c0fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.377591398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a09337fc-9621-4efc-b274-be71a431e0ec name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.377663384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a09337fc-9621-4efc-b274-be71a431e0ec name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.377698591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a09337fc-9621-4efc-b274-be71a431e0ec name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.411748412Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efa7c567-04a5-4817-b401-1e369a74509a name=/runtime.v1.RuntimeService/Version
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.411836380Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efa7c567-04a5-4817-b401-1e369a74509a name=/runtime.v1.RuntimeService/Version
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.412974413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5e18c2e-b694-49b4-bf74-f05f3a8479c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.413417579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722730980413397511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5e18c2e-b694-49b4-bf74-f05f3a8479c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.413974406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=320209f4-bc1a-4210-bc10-06c7f732cc58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.414029391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=320209f4-bc1a-4210-bc10-06c7f732cc58 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:23:00 old-k8s-version-576210 crio[653]: time="2024-08-04 00:23:00.414061110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=320209f4-bc1a-4210-bc10-06c7f732cc58 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 4 00:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050227] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041126] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.789171] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.600311] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.566673] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.215618] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049621] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.191384] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.139006] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.271189] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +6.294398] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.066429] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.776417] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[Aug 4 00:15] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 4 00:19] systemd-fstab-generator[5026]: Ignoring "noauto" option for root device
	[Aug 4 00:21] systemd-fstab-generator[5298]: Ignoring "noauto" option for root device
	[  +0.071111] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:23:00 up 8 min,  0 users,  load average: 0.05, 0.13, 0.08
	Linux old-k8s-version-576210 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /usr/local/go/src/sync/mutex.go:179 +0x4f
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]: context.WithDeadline(0x4f7fe00, 0xc000120018, 0xc1a3d05fc983d227, 0x82ce22dc6, 0x70c7020, 0x4f7fe40, 0xc000339b60, 0xc0006f8100)
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /usr/local/go/src/context/context.go:455 +0x28c
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]: net.(*Dialer).DialContext(0xc000be02a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009c1ad0, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /usr/local/go/src/net/dial.go:376 +0x9dd
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bdf4e0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009c1ad0, 0x24, 0x60, 0x7f46f9505060, 0x118, ...)
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]: net/http.(*Transport).dial(0xc00087eb40, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009c1ad0, 0x24, 0x0, 0x10474e494e524157, 0x564553140a181202, ...)
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]: net/http.(*Transport).dialConn(0xc00087eb40, 0x4f7fe00, 0xc000120018, 0x0, 0xc0006c0240, 0x5, 0xc0009c1ad0, 0x24, 0x0, 0xc0006fe120, ...)
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]: net/http.(*Transport).dialConnFor(0xc00087eb40, 0xc00003dd90)
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]: created by net/http.(*Transport).queueForDial
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5480]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 04 00:22:57 old-k8s-version-576210 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 04 00:22:57 old-k8s-version-576210 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 04 00:22:57 old-k8s-version-576210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 04 00:22:57 old-k8s-version-576210 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 04 00:22:57 old-k8s-version-576210 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5507]: I0804 00:22:57.938392    5507 server.go:416] Version: v1.20.0
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5507]: I0804 00:22:57.938962    5507 server.go:837] Client rotation is on, will bootstrap in background
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5507]: I0804 00:22:57.941096    5507 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5507]: I0804 00:22:57.942465    5507 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 04 00:22:57 old-k8s-version-576210 kubelet[5507]: W0804 00:22:57.942485    5507 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (223.937582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-576210" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (770.19s)
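The exit above is minikube's K8S_KUBELET_NOT_RUNNING path, and its own suggestion (together with issue #4172) points at the kubelet cgroup driver. Below is a minimal local-reproduction sketch built only from that hint and the flags visible in this run (kvm2 driver, cri-o runtime, Kubernetes v1.20.0); whether pinning the cgroup driver to systemd actually clears the "Cannot detect current cgroup on cgroup v2" warning seen in the kubelet log is an assumption, not something this report shows.

    # assumed reproduction steps; profile name and flags are taken from the log above
    out/minikube-linux-amd64 delete -p old-k8s-version-576210
    out/minikube-linux-amd64 start -p old-k8s-version-576210 \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd
    # if the control plane still does not come up, read the kubelet unit on the node
    out/minikube-linux-amd64 ssh -p old-k8s-version-576210 "sudo journalctl -xeu kubelet | tail -n 50"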

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016: exit status 3 (3.16744787s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:10:52.673699   64955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	E0804 00:10:52.673728   64955 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-118016 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0804 00:10:58.007852   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-118016 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153279312s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-118016 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016: exit status 3 (3.062606152s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:11:01.889723   65040 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host
	E0804 00:11:01.889745   65040 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-118016" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
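For this group the failure is in the status probe rather than in kubeadm: after the stop, minikube could not reach the node over SSH (dial tcp 192.168.61.137:22: no route to host), so the host state rendered as "Error" instead of the expected "Stopped", and the following addons enable call failed before it could check whether the cluster was paused. A by-hand re-check of that state, assuming the same profile is still present; the trailing `|| true` is only there because minikube status typically exits non-zero when the host is not Running.

    # assumed manual check; profile name taken from the failing test above
    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 || true
    # the test expects this to print "Stopped"; this run printed "Error"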

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068: exit status 3 (3.16783203s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:11:43.105729   65316 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.132:22: connect: no route to host
	E0804 00:11:43.105748   65316 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.132:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-969068 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-969068 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153943311s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.132:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-969068 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068: exit status 3 (3.062053324s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:11:52.321759   65395 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.132:22: connect: no route to host
	E0804 00:11:52.321808   65395 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.132:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-969068" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-04 00:29:07.314710513 +0000 UTC m=+6083.256954561
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-969068 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-969068 logs -n 25: (2.197810438s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302198                           | kubernetes-upgrade-302198    | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-551054 sudo                            | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877598            | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-705918                              | cert-expiration-705918       | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-423330 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | disable-driver-mounts-423330                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:09 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-118016             | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC | 04 Aug 24 00:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-576210        | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:11:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:11:52.361065   65441 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:11:52.361334   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361345   65441 out.go:304] Setting ErrFile to fd 2...
	I0804 00:11:52.361349   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361548   65441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:11:52.362087   65441 out.go:298] Setting JSON to false
	I0804 00:11:52.363002   65441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6856,"bootTime":1722723456,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:11:52.363061   65441 start.go:139] virtualization: kvm guest
	I0804 00:11:52.365345   65441 out.go:177] * [default-k8s-diff-port-969068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:11:52.367170   65441 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:11:52.367161   65441 notify.go:220] Checking for updates...
	I0804 00:11:52.369837   65441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:11:52.371134   65441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:11:52.372226   65441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:11:52.373445   65441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:11:52.374802   65441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:11:52.376375   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:11:52.376787   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.376859   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.392495   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0804 00:11:52.392954   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.393477   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.393497   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.393883   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.394048   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.394313   65441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:11:52.394606   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.394638   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.409194   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0804 00:11:52.409594   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.410032   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.410050   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.410358   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.410529   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.445480   65441 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:11:52.446679   65441 start.go:297] selected driver: kvm2
	I0804 00:11:52.446694   65441 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.446827   65441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:11:52.447792   65441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.447886   65441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:11:52.462893   65441 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:11:52.463275   65441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:11:52.463306   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:11:52.463316   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:11:52.463368   65441 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.463486   65441 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.465374   65441 out.go:177] * Starting "default-k8s-diff-port-969068" primary control-plane node in "default-k8s-diff-port-969068" cluster
	I0804 00:11:52.466656   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:11:52.466698   65441 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:11:52.466710   65441 cache.go:56] Caching tarball of preloaded images
	I0804 00:11:52.466790   65441 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:11:52.466801   65441 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:11:52.466901   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:11:52.467100   65441 start.go:360] acquireMachinesLock for default-k8s-diff-port-969068: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:11:55.809602   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:11:58.881666   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:04.961665   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:08.033617   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:14.113634   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:17.185623   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:23.265618   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:26.337594   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:32.417583   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:35.489705   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:41.569654   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:44.641653   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:50.721640   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:53.793649   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:59.873643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:02.945676   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:09.025652   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:12.097647   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:18.177740   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:21.249606   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:27.329637   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:30.401648   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:36.481588   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:39.553638   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:45.633633   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:48.705646   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:54.785636   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:57.857662   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:03.937643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:07.009557   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:13.089694   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:16.161619   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:22.241650   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:25.313612   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:28.318586   64758 start.go:364] duration metric: took 4m16.324186239s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:14:28.318635   64758 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:28.318646   64758 fix.go:54] fixHost starting: 
	I0804 00:14:28.319092   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:28.319128   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:28.334850   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0804 00:14:28.335321   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:28.335817   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:14:28.335848   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:28.336204   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:28.336435   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:28.336622   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:14:28.338146   64758 fix.go:112] recreateIfNeeded on old-k8s-version-576210: state=Stopped err=<nil>
	I0804 00:14:28.338166   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	W0804 00:14:28.338322   64758 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:28.340640   64758 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	I0804 00:14:28.315605   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:28.315642   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316035   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:14:28.316073   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316325   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:14:28.318440   64502 machine.go:97] duration metric: took 4m37.42620041s to provisionDockerMachine
	I0804 00:14:28.318477   64502 fix.go:56] duration metric: took 4m37.448052873s for fixHost
	I0804 00:14:28.318485   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 4m37.44807127s
	W0804 00:14:28.318509   64502 start.go:714] error starting host: provision: host is not running
	W0804 00:14:28.318594   64502 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0804 00:14:28.318606   64502 start.go:729] Will try again in 5 seconds ...
	I0804 00:14:28.342217   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .Start
	I0804 00:14:28.342401   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:14:28.343274   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:14:28.343761   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:14:28.344268   64758 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:14:28.345080   64758 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:14:29.575420   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:14:29.576307   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.576754   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.576842   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.576711   66003 retry.go:31] will retry after 272.821874ms: waiting for machine to come up
	I0804 00:14:29.851363   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.851951   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.851976   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.851895   66003 retry.go:31] will retry after 247.116514ms: waiting for machine to come up
	I0804 00:14:30.100479   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.100883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.100916   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.100833   66003 retry.go:31] will retry after 353.251065ms: waiting for machine to come up
	I0804 00:14:30.455526   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.455975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.456004   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.455933   66003 retry.go:31] will retry after 558.071575ms: waiting for machine to come up
	I0804 00:14:31.015539   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.015974   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.016000   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.015917   66003 retry.go:31] will retry after 514.757536ms: waiting for machine to come up
	I0804 00:14:31.532799   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.533232   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.533250   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.533186   66003 retry.go:31] will retry after 607.548546ms: waiting for machine to come up
	I0804 00:14:33.318807   64502 start.go:360] acquireMachinesLock for embed-certs-877598: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:14:32.142162   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:32.142658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:32.142693   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:32.142610   66003 retry.go:31] will retry after 897.977595ms: waiting for machine to come up
	I0804 00:14:33.042628   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:33.043002   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:33.043028   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:33.042966   66003 retry.go:31] will retry after 1.094117762s: waiting for machine to come up
	I0804 00:14:34.138946   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:34.139459   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:34.139485   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:34.139414   66003 retry.go:31] will retry after 1.435055372s: waiting for machine to come up
	I0804 00:14:35.576253   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:35.576603   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:35.576625   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:35.576547   66003 retry.go:31] will retry after 1.688006591s: waiting for machine to come up
	I0804 00:14:37.265928   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:37.266429   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:37.266456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:37.266371   66003 retry.go:31] will retry after 2.356818801s: waiting for machine to come up
	I0804 00:14:39.624408   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:39.624832   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:39.624863   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:39.624775   66003 retry.go:31] will retry after 2.41856098s: waiting for machine to come up
	I0804 00:14:46.442402   65087 start.go:364] duration metric: took 3m44.405576801s to acquireMachinesLock for "no-preload-118016"
	I0804 00:14:46.442459   65087 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:46.442469   65087 fix.go:54] fixHost starting: 
	I0804 00:14:46.442938   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:46.442975   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:46.459944   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0804 00:14:46.460375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:46.460851   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:14:46.460871   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:46.461211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:46.461402   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:14:46.461538   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:14:46.463097   65087 fix.go:112] recreateIfNeeded on no-preload-118016: state=Stopped err=<nil>
	I0804 00:14:46.463126   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	W0804 00:14:46.463282   65087 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:46.465711   65087 out.go:177] * Restarting existing kvm2 VM for "no-preload-118016" ...
	I0804 00:14:42.044498   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:42.044855   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:42.044882   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:42.044822   66003 retry.go:31] will retry after 3.111190148s: waiting for machine to come up
	I0804 00:14:45.158161   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.158688   64758 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:14:45.158709   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:14:45.158719   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.159112   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.159138   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | skip adding static IP to network mk-old-k8s-version-576210 - found existing host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"}
	I0804 00:14:45.159151   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:14:45.159163   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:14:45.159172   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:14:45.161469   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161782   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.161812   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161936   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:14:45.161975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:14:45.162015   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:14:45.162034   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:14:45.162044   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:14:45.281546   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
	I0804 00:14:45.281859   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:14:45.282574   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.284998   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285386   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.285414   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285614   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:14:45.285806   64758 machine.go:94] provisionDockerMachine start ...
	I0804 00:14:45.285823   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:45.286098   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.288285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288640   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.288668   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288753   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.288931   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289088   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289253   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.289426   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.289628   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.289640   64758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:14:45.386001   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:14:45.386036   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386325   64758 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:14:45.386348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386536   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.389316   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389718   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.389739   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389948   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.390122   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390285   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390415   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.390557   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.390758   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.390776   64758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:14:45.499644   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:14:45.499695   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.502583   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.502935   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.502959   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.503123   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.503318   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503456   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503570   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.503729   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.503898   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.503915   64758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:14:45.606971   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:45.607003   64758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:14:45.607045   64758 buildroot.go:174] setting up certificates
	I0804 00:14:45.607053   64758 provision.go:84] configureAuth start
	I0804 00:14:45.607062   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.607327   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.610009   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610378   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.610407   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610545   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.612549   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.612876   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.612908   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.613071   64758 provision.go:143] copyHostCerts
	I0804 00:14:45.613134   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:14:45.613147   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:14:45.613231   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:14:45.613343   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:14:45.613368   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:14:45.613410   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:14:45.613491   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:14:45.613501   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:14:45.613535   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:14:45.613609   64758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
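The server certificate above is generated in-process by minikube's provisioner; for readers who want to reproduce the result by hand, a rough openssl equivalent (illustrative sketch only; the file names and SAN list are copied from the log line above, and minikube does not actually shell out to openssl here) would be:

	# Illustrative only: minikube creates this certificate in Go, not via openssl.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.old-k8s-version-576210"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.154,DNS:localhost,DNS:minikube,DNS:old-k8s-version-576210")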
	I0804 00:14:45.794221   64758 provision.go:177] copyRemoteCerts
	I0804 00:14:45.794276   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:14:45.794299   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.796859   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797182   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.797225   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.797555   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.797687   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.797804   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:45.875704   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:14:45.903765   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:14:45.930101   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:14:45.955639   64758 provision.go:87] duration metric: took 348.556108ms to configureAuth
	I0804 00:14:45.955668   64758 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:14:45.955874   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:14:45.955960   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.958487   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958835   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.958950   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958970   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.959193   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.959616   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.959789   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.959810   64758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:14:46.217683   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:14:46.217725   64758 machine.go:97] duration metric: took 931.901933ms to provisionDockerMachine
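The CRIO_MINIKUBE_OPTIONS file written just above only takes effect because the guest's crio.service reads it as an environment file. A hypothetical drop-in showing that wiring (the exact unit shipped in the minikube guest image may differ):

	# Hypothetical /etc/systemd/system/crio.service.d/10-minikube.conf; the real
	# unit inside the buildroot guest image may be wired differently.
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS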
	I0804 00:14:46.217742   64758 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:14:46.217758   64758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:14:46.217787   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.218127   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:14:46.218151   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.220834   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221148   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.221170   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221342   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.221576   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.221733   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.221867   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.300102   64758 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:14:46.304434   64758 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:14:46.304464   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:14:46.304538   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:14:46.304631   64758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:14:46.304747   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:14:46.314378   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:46.339057   64758 start.go:296] duration metric: took 121.299069ms for postStartSetup
	I0804 00:14:46.339105   64758 fix.go:56] duration metric: took 18.020458894s for fixHost
	I0804 00:14:46.339129   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.341883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342258   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.342285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.342688   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342856   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342992   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.343161   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:46.343385   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:46.343400   64758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:14:46.442247   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730486.414818212
	
	I0804 00:14:46.442275   64758 fix.go:216] guest clock: 1722730486.414818212
	I0804 00:14:46.442288   64758 fix.go:229] Guest: 2024-08-04 00:14:46.414818212 +0000 UTC Remote: 2024-08-04 00:14:46.339109981 +0000 UTC m=+274.490542023 (delta=75.708231ms)
	I0804 00:14:46.442313   64758 fix.go:200] guest clock delta is within tolerance: 75.708231ms
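The tolerance check above simply compares the guest clock, read over SSH with `date +%s.%N`, against the host clock captured at the same moment. A minimal shell sketch of the same comparison (host and guest details are taken from this log; no particular tolerance value is implied):

	# Compare guest and host wall clocks, as the fix.go check above does.
	host_now=$(date +%s.%N)
	guest_now=$(ssh docker@192.168.72.154 'date +%s.%N')
	echo "clock delta: $(echo "$guest_now - $host_now" | bc -l)s"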
	I0804 00:14:46.442319   64758 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 18.123699316s
	I0804 00:14:46.442347   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.442656   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:46.445456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.445865   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.445892   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.446069   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446577   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446743   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446816   64758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:14:46.446850   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.446965   64758 ssh_runner.go:195] Run: cat /version.json
	I0804 00:14:46.446987   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.449576   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449794   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449953   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.449983   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450178   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450265   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.450317   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450384   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450520   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450605   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450667   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450733   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.450780   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450910   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.534686   64758 ssh_runner.go:195] Run: systemctl --version
	I0804 00:14:46.554270   64758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:14:46.708220   64758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:14:46.714541   64758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:14:46.714607   64758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:14:46.731642   64758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:14:46.731668   64758 start.go:495] detecting cgroup driver to use...
	I0804 00:14:46.731739   64758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:14:46.748782   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:14:46.763556   64758 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:14:46.763640   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:14:46.778075   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:14:46.793133   64758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:14:46.466927   65087 main.go:141] libmachine: (no-preload-118016) Calling .Start
	I0804 00:14:46.467081   65087 main.go:141] libmachine: (no-preload-118016) Ensuring networks are active...
	I0804 00:14:46.467696   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network default is active
	I0804 00:14:46.468023   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network mk-no-preload-118016 is active
	I0804 00:14:46.468344   65087 main.go:141] libmachine: (no-preload-118016) Getting domain xml...
	I0804 00:14:46.468932   65087 main.go:141] libmachine: (no-preload-118016) Creating domain...
	I0804 00:14:46.918377   64758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:14:47.059683   64758 docker.go:233] disabling docker service ...
	I0804 00:14:47.059753   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:14:47.074819   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:14:47.092184   64758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:14:47.235274   64758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:14:47.357937   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:14:47.375273   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:14:47.395182   64758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:14:47.395236   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.407036   64758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:14:47.407092   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.418562   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.434481   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
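Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager and conmon cgroup pinned. An illustrative reconstruction of the affected keys (not the file's verbatim contents; the section headers are assumed):

	# Illustrative reconstruction of the keys touched above; section names assumed.
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"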
	I0804 00:14:47.447488   64758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:14:47.460242   64758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:14:47.471089   64758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:14:47.471143   64758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:14:47.486698   64758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
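The three commands above are the usual bridge-netfilter fallback: probing the sysctl fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is switched on. A standalone sketch of that sequence:

	# Bridge-netfilter preparation, mirroring the steps logged above.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter   # creates /proc/sys/net/bridge/bridge-nf-call-iptables
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"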
	I0804 00:14:47.498754   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:47.630867   64758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:14:47.796598   64758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:14:47.796690   64758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:14:47.802302   64758 start.go:563] Will wait 60s for crictl version
	I0804 00:14:47.802364   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:47.806368   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:14:47.847588   64758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:14:47.847679   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.877936   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.908229   64758 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:14:47.909635   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:47.912658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913102   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:47.913130   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913438   64758 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:14:47.917910   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:47.931201   64758 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:14:47.931318   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:14:47.931381   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:47.980001   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:47.980071   64758 ssh_runner.go:195] Run: which lz4
	I0804 00:14:47.984277   64758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:14:47.988781   64758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:14:47.988810   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:14:49.706968   64758 crio.go:462] duration metric: took 1.722721175s to copy over tarball
	I0804 00:14:49.707059   64758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:14:47.715321   65087 main.go:141] libmachine: (no-preload-118016) Waiting to get IP...
	I0804 00:14:47.716397   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.716853   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.716889   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.716820   66120 retry.go:31] will retry after 187.841432ms: waiting for machine to come up
	I0804 00:14:47.906481   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.906984   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.907018   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.906942   66120 retry.go:31] will retry after 389.569097ms: waiting for machine to come up
	I0804 00:14:48.298691   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.299997   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.300021   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.299947   66120 retry.go:31] will retry after 382.905254ms: waiting for machine to come up
	I0804 00:14:48.684628   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.685095   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.685127   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.685066   66120 retry.go:31] will retry after 526.267085ms: waiting for machine to come up
	I0804 00:14:49.213459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.214180   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.214203   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.214142   66120 retry.go:31] will retry after 666.253139ms: waiting for machine to come up
	I0804 00:14:49.882141   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.882610   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.882639   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.882560   66120 retry.go:31] will retry after 776.560525ms: waiting for machine to come up
	I0804 00:14:50.660679   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:50.661149   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:50.661177   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:50.661105   66120 retry.go:31] will retry after 825.927722ms: waiting for machine to come up
	I0804 00:14:51.488562   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:51.488937   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:51.488964   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:51.488894   66120 retry.go:31] will retry after 1.210535859s: waiting for machine to come up
	I0804 00:14:52.511242   64758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.804147671s)
	I0804 00:14:52.511275   64758 crio.go:469] duration metric: took 2.804279705s to extract the tarball
	I0804 00:14:52.511285   64758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:14:52.553905   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:52.587405   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:52.587429   64758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:14:52.587496   64758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.587513   64758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.587550   64758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.587551   64758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.587554   64758 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.587567   64758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.587570   64758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.587577   64758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.589240   64758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.589239   64758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.589247   64758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.589211   64758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.589287   64758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589579   64758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.742969   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.766505   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.782813   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:14:52.788509   64758 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:14:52.788553   64758 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.788598   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.823108   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.829531   64758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:14:52.829577   64758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.829648   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.858209   64758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:14:52.858238   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.858245   64758 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:14:52.858288   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.888665   64758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:14:52.888717   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.888748   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:14:52.888717   64758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.888794   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.918127   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.921386   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:14:52.929839   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.977866   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:14:52.977919   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.977960   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:14:52.994379   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.003198   64758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:14:53.003233   64758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.003273   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.056310   64758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:14:53.056338   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:14:53.056357   64758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.056403   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.062077   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.062119   64758 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:14:53.062161   64758 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.062206   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.064260   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.114709   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:14:53.114758   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.118375   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:14:53.147635   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:14:53.497155   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:53.647242   64758 cache_images.go:92] duration metric: took 1.059794593s to LoadCachedImages
	W0804 00:14:53.647353   64758 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0804 00:14:53.647370   64758 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:14:53.647507   64758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:14:53.647586   64758 ssh_runner.go:195] Run: crio config
	I0804 00:14:53.710377   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:14:53.710399   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:14:53.710411   64758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:14:53.710437   64758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:14:53.710583   64758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:14:53.710661   64758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:14:53.721942   64758 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:14:53.722005   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:14:53.732623   64758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:14:53.749878   64758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:14:53.767147   64758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0804 00:14:53.785522   64758 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:14:53.789438   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:53.802152   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:53.934508   64758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:14:53.952247   64758 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:14:53.952280   64758 certs.go:194] generating shared ca certs ...
	I0804 00:14:53.952301   64758 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:53.952470   64758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:14:53.952523   64758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:14:53.952536   64758 certs.go:256] generating profile certs ...
	I0804 00:14:53.952658   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:14:53.952730   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:14:53.952783   64758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:14:53.952948   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:14:53.953000   64758 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:14:53.953013   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:14:53.953048   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:14:53.953084   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:14:53.953114   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:14:53.953191   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:53.954013   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:14:54.001446   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:14:54.029628   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:14:54.062713   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:14:54.090711   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:14:54.117970   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:14:54.163691   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:14:54.190151   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:14:54.219334   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:14:54.244677   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:14:54.269795   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:14:54.294949   64758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:14:54.312330   64758 ssh_runner.go:195] Run: openssl version
	I0804 00:14:54.318320   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:14:54.328932   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333686   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333737   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.341330   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:14:54.356008   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:14:54.368966   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373896   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373954   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.379770   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:14:54.390903   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:14:54.402637   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407296   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407362   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.413215   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
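The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, which is how the system trust store locates a CA certificate. The per-certificate steps reduce to:

	# Derive the subject-hash link name for a trusted certificate and install it.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"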
	I0804 00:14:54.424473   64758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:14:54.429673   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:14:54.436038   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:14:54.442091   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:14:54.448507   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:14:54.455421   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:14:54.461969   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
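Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit flags the certificate as expiring soon. For example:

	# Exit status 0 means the cert is still valid 24h from now; 1 means it is not.
	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "etcd server certificate expires within 24h"
	fi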
	I0804 00:14:54.468042   64758 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:14:54.468151   64758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:14:54.468208   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.508109   64758 cri.go:89] found id: ""
	I0804 00:14:54.508183   64758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:14:54.518712   64758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:14:54.518736   64758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:14:54.518788   64758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:14:54.528545   64758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:14:54.529780   64758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:14:54.530411   64758 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-576210" cluster setting kubeconfig missing "old-k8s-version-576210" context setting]
	I0804 00:14:54.531316   64758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:54.550431   64758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:14:54.561047   64758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.154
	I0804 00:14:54.561086   64758 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:14:54.561108   64758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:14:54.561163   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.597213   64758 cri.go:89] found id: ""
	I0804 00:14:54.597282   64758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:14:54.612914   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:14:54.622533   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:14:54.622562   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:14:54.622613   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:14:54.632746   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:14:54.632812   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:14:54.642197   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:14:54.651204   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:14:54.651268   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:14:54.660496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.669448   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:14:54.669512   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.678773   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:14:54.687854   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:14:54.687902   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:14:54.697066   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:14:54.707036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:54.840553   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.551919   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.790500   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.898210   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.995621   64758 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:14:55.995711   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.496072   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:52.701200   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:52.701574   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:52.701598   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:52.701547   66120 retry.go:31] will retry after 1.518623613s: waiting for machine to come up
	I0804 00:14:54.221367   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:54.221886   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:54.221916   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:54.221835   66120 retry.go:31] will retry after 1.869121058s: waiting for machine to come up
	I0804 00:14:56.092101   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:56.092527   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:56.092550   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:56.092488   66120 retry.go:31] will retry after 2.071227436s: waiting for machine to come up
	I0804 00:14:56.995965   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.496285   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.995805   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.496549   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.996224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.496360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.996056   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.496435   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
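
The pgrep lines repeating every half second above are a poll loop waiting for the restarted kube-apiserver process to show up. A minimal sketch of such a wait loop, assuming the 500ms interval implied by the timestamps and a hypothetical two-minute timeout (editorial illustration, not the actual minikube code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process matching the
// minikube manifest appears, or the timeout elapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
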
	I0804 00:14:58.166383   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:58.166760   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:58.166807   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:58.166729   66120 retry.go:31] will retry after 2.352991709s: waiting for machine to come up
	I0804 00:15:00.522153   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:00.522630   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:15:00.522657   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:15:00.522584   66120 retry.go:31] will retry after 3.326179831s: waiting for machine to come up
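
The retry.go lines above show the driver repeatedly re-checking for the VM's DHCP-assigned IP address, with the wait growing on each attempt. A generic sketch of that pattern, assuming jittered, growing backoff (the helper retryWithBackoff and its parameters are illustrative, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with growing, jittered delays, similar in
// spirit to the increasing "will retry after ..." intervals in the log.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		// Add up to 50% jitter so concurrent waiters do not poll in lockstep.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2+1)))
		delay = delay * 3 / 2
	}
	return errors.New("gave up waiting for the machine to report an IP address")
}

func main() {
	err := retryWithBackoff(5, time.Second, func() error {
		// Placeholder for "look up the domain's current IP address via libvirt".
		return errors.New("machine has no IP address yet")
	})
	fmt.Println(err)
}
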
	I0804 00:15:05.170439   65441 start.go:364] duration metric: took 3m12.703297591s to acquireMachinesLock for "default-k8s-diff-port-969068"
	I0804 00:15:05.170512   65441 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:05.170520   65441 fix.go:54] fixHost starting: 
	I0804 00:15:05.170935   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:05.170974   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:05.188546   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0804 00:15:05.188997   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:05.189494   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:05.189518   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:05.189933   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:05.190132   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:05.190276   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:05.191653   65441 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969068: state=Stopped err=<nil>
	I0804 00:15:05.191684   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	W0804 00:15:05.191834   65441 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:05.194275   65441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-969068" ...
	I0804 00:15:01.996148   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.496756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.996430   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.496646   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.996707   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.496772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.995997   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.496651   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.996384   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.496403   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.850063   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850518   65087 main.go:141] libmachine: (no-preload-118016) Found IP for machine: 192.168.61.137
	I0804 00:15:03.850544   65087 main.go:141] libmachine: (no-preload-118016) Reserving static IP address...
	I0804 00:15:03.850559   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has current primary IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850970   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.851001   65087 main.go:141] libmachine: (no-preload-118016) DBG | skip adding static IP to network mk-no-preload-118016 - found existing host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"}
	I0804 00:15:03.851015   65087 main.go:141] libmachine: (no-preload-118016) Reserved static IP address: 192.168.61.137
	I0804 00:15:03.851030   65087 main.go:141] libmachine: (no-preload-118016) Waiting for SSH to be available...
	I0804 00:15:03.851048   65087 main.go:141] libmachine: (no-preload-118016) DBG | Getting to WaitForSSH function...
	I0804 00:15:03.853316   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853676   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.853705   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853819   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH client type: external
	I0804 00:15:03.853850   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa (-rw-------)
	I0804 00:15:03.853886   65087 main.go:141] libmachine: (no-preload-118016) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:03.853901   65087 main.go:141] libmachine: (no-preload-118016) DBG | About to run SSH command:
	I0804 00:15:03.853913   65087 main.go:141] libmachine: (no-preload-118016) DBG | exit 0
	I0804 00:15:03.981414   65087 main.go:141] libmachine: (no-preload-118016) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:03.981807   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetConfigRaw
	I0804 00:15:03.982419   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:03.985062   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985400   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.985433   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985674   65087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/config.json ...
	I0804 00:15:03.985857   65087 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:03.985873   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:03.986090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:03.988490   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.988798   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.988826   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.989017   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:03.989183   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989342   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989510   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:03.989697   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:03.989916   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:03.989927   65087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:04.106042   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:04.106090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106372   65087 buildroot.go:166] provisioning hostname "no-preload-118016"
	I0804 00:15:04.106398   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.109434   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.109803   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109919   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.110092   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110248   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110423   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.110582   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.110749   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.110764   65087 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-118016 && echo "no-preload-118016" | sudo tee /etc/hostname
	I0804 00:15:04.239856   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-118016
	
	I0804 00:15:04.239884   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.242877   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243241   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.243271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243486   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.243712   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.243897   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.244046   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.244232   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.244420   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.244443   65087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118016/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:04.367259   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
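
The shell fragment above keeps /etc/hosts consistent with the new hostname: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry or appends one. The same logic as a standalone Go sketch (editorial illustration only; ensureHostsEntry is a hypothetical helper, and the path and hostname are taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee logic above: leave the file alone
// if the hostname is already mapped, otherwise rewrite or append a 127.0.1.1 line.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) > 0 && fields[len(fields)-1] == name {
			return nil // hostname already present on some line
		}
	}
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "no-preload-118016"); err != nil {
		fmt.Println(err)
	}
}
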
	I0804 00:15:04.367289   65087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:04.367330   65087 buildroot.go:174] setting up certificates
	I0804 00:15:04.367340   65087 provision.go:84] configureAuth start
	I0804 00:15:04.367432   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.367848   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:04.370330   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370630   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.370658   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370744   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.372799   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373175   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.373203   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373308   65087 provision.go:143] copyHostCerts
	I0804 00:15:04.373386   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:04.373399   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:04.373458   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:04.373557   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:04.373565   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:04.373585   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:04.373651   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:04.373657   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:04.373675   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:04.373732   65087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.no-preload-118016 san=[127.0.0.1 192.168.61.137 localhost minikube no-preload-118016]
	I0804 00:15:04.467261   65087 provision.go:177] copyRemoteCerts
	I0804 00:15:04.467322   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:04.467347   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.469843   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470126   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.470154   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470297   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.470478   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.470644   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.470761   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:04.559980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:04.585701   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:04.610270   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:04.633954   65087 provision.go:87] duration metric: took 266.53536ms to configureAuth
	I0804 00:15:04.633981   65087 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:04.634154   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:15:04.634219   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.636880   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637243   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.637271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637452   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.637664   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637823   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637921   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.638060   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.638234   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.638250   65087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:04.916045   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:04.916077   65087 machine.go:97] duration metric: took 930.20802ms to provisionDockerMachine
	I0804 00:15:04.916088   65087 start.go:293] postStartSetup for "no-preload-118016" (driver="kvm2")
	I0804 00:15:04.916100   65087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:04.916113   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:04.916429   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:04.916453   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.919155   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919485   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.919514   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919657   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.919859   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.920026   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.920166   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.012754   65087 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:05.017004   65087 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:05.017024   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:05.017091   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:05.017180   65087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:05.017293   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:05.026980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:05.051265   65087 start.go:296] duration metric: took 135.164451ms for postStartSetup
	I0804 00:15:05.051309   65087 fix.go:56] duration metric: took 18.608839754s for fixHost
	I0804 00:15:05.051331   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.054286   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054683   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.054710   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054876   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.055127   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055321   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055485   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.055668   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:05.055870   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:05.055882   65087 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:05.170285   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730505.141206116
	
	I0804 00:15:05.170314   65087 fix.go:216] guest clock: 1722730505.141206116
	I0804 00:15:05.170321   65087 fix.go:229] Guest: 2024-08-04 00:15:05.141206116 +0000 UTC Remote: 2024-08-04 00:15:05.051313292 +0000 UTC m=+243.154971169 (delta=89.892824ms)
	I0804 00:15:05.170341   65087 fix.go:200] guest clock delta is within tolerance: 89.892824ms
	I0804 00:15:05.170359   65087 start.go:83] releasing machines lock for "no-preload-118016", held for 18.727925423s
	I0804 00:15:05.170392   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.170673   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:05.173694   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174084   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.174117   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174265   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.174828   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175015   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175103   65087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:05.175145   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.175263   65087 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:05.175286   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.177906   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178280   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178307   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178329   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178470   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.178688   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.178777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178832   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178854   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.178945   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.179025   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.179111   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.179265   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.179417   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.282397   65087 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:05.288682   65087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:05.434388   65087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:05.440857   65087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:05.440937   65087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:05.461853   65087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:05.461879   65087 start.go:495] detecting cgroup driver to use...
	I0804 00:15:05.461944   65087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:05.478397   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:05.494093   65087 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:05.494151   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:05.509391   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:05.524127   65087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:05.640185   65087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:05.784994   65087 docker.go:233] disabling docker service ...
	I0804 00:15:05.785071   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:05.802802   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:05.818424   65087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:05.970147   65087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:06.099759   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:06.114434   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:06.132989   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:06.433914   65087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0804 00:15:06.433969   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.452155   65087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:06.452245   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.464730   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.475848   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.488341   65087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:06.501984   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.514776   65087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.534773   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.547076   65087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:06.558639   65087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:06.558695   65087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:06.572920   65087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:06.583298   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:06.705307   65087 ssh_runner.go:195] Run: sudo systemctl restart crio
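
Taken together, the sed, modprobe and sysctl commands above prepare the guest's CRI-O runtime: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs with a pod-scoped conmon cgroup, open unprivileged ports, enable bridge netfilter and IP forwarding, and then restart the service. A condensed Go sketch of that sequence, run locally rather than over SSH (editorial illustration; the run helper is an assumption, and only a representative subset of the logged edits is shown):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command and surfaces its combined output on failure.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}
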
	I0804 00:15:06.845776   65087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:06.845840   65087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:06.851710   65087 start.go:563] Will wait 60s for crictl version
	I0804 00:15:06.851764   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:06.855899   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:06.904392   65087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:06.904493   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.932866   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.963071   65087 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0804 00:15:05.195984   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Start
	I0804 00:15:05.196175   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring networks are active...
	I0804 00:15:05.196904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network default is active
	I0804 00:15:05.197256   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network mk-default-k8s-diff-port-969068 is active
	I0804 00:15:05.197709   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Getting domain xml...
	I0804 00:15:05.198474   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Creating domain...
	I0804 00:15:06.489009   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting to get IP...
	I0804 00:15:06.490137   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490569   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490641   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.490549   66290 retry.go:31] will retry after 298.701839ms: waiting for machine to come up
	I0804 00:15:06.791467   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791938   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791960   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.791894   66290 retry.go:31] will retry after 373.395742ms: waiting for machine to come up
	I0804 00:15:07.166622   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167139   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.167048   66290 retry.go:31] will retry after 404.799649ms: waiting for machine to come up
	I0804 00:15:06.995779   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.495822   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.995970   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.495870   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.996379   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.495852   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.495912   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.996591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.495964   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.964314   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:06.967088   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967517   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:06.967547   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967787   65087 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:06.973133   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:06.990153   65087 kubeadm.go:883] updating cluster {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:06.990339   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.297536   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.591746   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.874720   65087 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:15:07.874798   65087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:07.914104   65087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0804 00:15:07.914127   65087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:15:07.914172   65087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.914212   65087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:07.914237   65087 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0804 00:15:07.914253   65087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.914324   65087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.914225   65087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.915833   65087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915838   65087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.915816   65087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 00:15:07.915882   65087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.915962   65087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.916150   65087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
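The "daemon lookup ... No such image" lines show each required image being looked up in the local Docker daemon and not found, so minikube continues with the tarballs in its on-disk cache (the "Loading image from: .../cache/images/..." lines further down). A minimal sketch of that daemon-then-registry lookup is below; the use of go-containerregistry and the resolveImage helper are assumptions for illustration, not taken from this log.

package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// resolveImage tries the local Docker daemon first and falls back to the
// remote registry, mirroring the "daemon lookup ... No such image" pattern above.
func resolveImage(image string) (v1.Image, error) {
	ref, err := name.ParseReference(image)
	if err != nil {
		return nil, err
	}
	if img, err := daemon.Image(ref); err == nil {
		return img, nil // image already present in the local daemon
	}
	return remote.Image(ref) // otherwise fetch image metadata from the registry
}

func main() {
	img, err := resolveImage("registry.k8s.io/pause:3.10")
	if err != nil {
		log.Fatal(err)
	}
	d, _ := img.Digest()
	fmt.Println("resolved digest:", d)
}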
	I0804 00:15:08.048225   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.050828   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.051873   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.056880   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.087643   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.091720   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0804 00:15:08.116485   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.173591   65087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0804 00:15:08.173642   65087 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.173686   65087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0804 00:15:08.173704   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.173725   65087 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.173777   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.191254   65087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0804 00:15:08.191298   65087 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.191352   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.195238   65087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0804 00:15:08.195290   65087 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.195340   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.246005   65087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0804 00:15:08.246048   65087 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.246100   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.336855   65087 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0804 00:15:08.336936   65087 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.336945   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.336965   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.337078   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.337120   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.337161   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.337207   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.425270   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425297   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.425296   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.425455   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425522   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.458378   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.458520   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.460719   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460827   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460889   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0804 00:15:08.460983   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:08.492690   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0804 00:15:08.492789   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492808   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492839   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:08.492852   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492863   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492932   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492976   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0804 00:15:08.493036   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
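The "copy: skipping ... (exists)" lines come from comparing each local cached tarball against the output of stat -c "%s %y" on the VM and skipping the transfer when the file is already there with the same fingerprint. A small sketch of that check follows; the runRemote helper and the size-only comparison are assumptions for brevity (the logged command also includes the mtime).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runRemote stands in for minikube's ssh_runner; here it just shells out to
// the ssh binary so the sketch stays self-contained. (hypothetical helper)
func runRemote(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", host, cmd).Output()
	return strings.TrimSpace(string(out)), err
}

// needsCopy reports whether the cached tarball must be transferred: the copy
// is skipped when a file of the same size already exists on the VM.
func needsCopy(host, localPath, remotePath string) (bool, error) {
	fi, err := os.Stat(localPath)
	if err != nil {
		return false, err
	}
	got, err := runRemote(host, fmt.Sprintf("stat -c %%s %q", remotePath))
	if err != nil {
		return true, nil // missing or unreadable on the VM: copy it
	}
	return got != fmt.Sprintf("%d", fi.Size()), nil
}

func main() {
	copyIt, err := needsCopy("docker@192.168.61.137",
		"/home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0",
		"/var/lib/minikube/images/etcd_3.5.15-0")
	fmt.Println(copyIt, err)
}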
	I0804 00:15:08.763401   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063302   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.570424927s)
	I0804 00:15:11.063326   65087 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.570469177s)
	I0804 00:15:11.063341   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0804 00:15:11.063348   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0804 00:15:11.063355   65087 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063377   65087 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.299939136s)
	I0804 00:15:11.063414   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063438   65087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0804 00:15:11.063468   65087 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063516   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:07.573639   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574103   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574150   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.574068   66290 retry.go:31] will retry after 552.033422ms: waiting for machine to come up
	I0804 00:15:08.127755   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128317   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128345   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.128254   66290 retry.go:31] will retry after 601.661676ms: waiting for machine to come up
	I0804 00:15:08.731160   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731571   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.731526   66290 retry.go:31] will retry after 899.954536ms: waiting for machine to come up
	I0804 00:15:09.632769   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633217   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:09.633188   66290 retry.go:31] will retry after 1.096119877s: waiting for machine to come up
	I0804 00:15:10.731586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732092   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732116   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:10.732062   66290 retry.go:31] will retry after 1.09033143s: waiting for machine to come up
	I0804 00:15:11.824287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824697   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824723   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:11.824648   66290 retry.go:31] will retry after 1.458040473s: waiting for machine to come up
	I0804 00:15:11.996494   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.496005   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.996429   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.496310   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.996525   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.495995   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.996172   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.495809   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.996016   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.496210   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.840723   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.777281435s)
	I0804 00:15:14.840759   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0804 00:15:14.840758   65087 ssh_runner.go:235] Completed: which crictl: (3.777229082s)
	I0804 00:15:14.840769   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:14.894482   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0804 00:15:14.894607   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:16.729218   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (1.888374505s)
	I0804 00:15:16.729270   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0804 00:15:16.729277   65087 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.834630766s)
	I0804 00:15:16.729304   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:16.729312   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0804 00:15:16.729368   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:13.284961   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:13.285332   66290 retry.go:31] will retry after 2.307816709s: waiting for machine to come up
	I0804 00:15:15.594435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594855   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594885   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:15.594804   66290 retry.go:31] will retry after 2.83542957s: waiting for machine to come up
	I0804 00:15:16.996765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.496069   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.995828   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.495847   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.996276   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.496155   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.996708   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.996145   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.496193   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.031187   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.301792704s)
	I0804 00:15:19.031309   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0804 00:15:19.031343   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:19.031389   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:20.493093   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.461677557s)
	I0804 00:15:20.493134   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0804 00:15:20.493152   65087 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:20.493202   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:18.433690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434156   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434188   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:18.434105   66290 retry.go:31] will retry after 2.563856777s: waiting for machine to come up
	I0804 00:15:20.999804   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000307   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:21.000236   66290 retry.go:31] will retry after 3.783170851s: waiting for machine to come up
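The repeated "will retry after ...: waiting for machine to come up" lines are libmachine polling for the restarted VM's IP address (eventually found via a DHCP lease, as the later lines show), with a randomized, growing delay between attempts. A generic sketch of that retry shape, with illustrative jitter and growth factors:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds, sleeping a growing, jittered
// interval between attempts, much like the retry.go lines in the log above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow roughly 1.5x per attempt
	}
	return errors.New("machine did not come up in time")
}

func main() {
	attempt := 0
	_ = retryWithBackoff(10, 500*time.Millisecond, func() error {
		attempt++
		if attempt < 4 {
			return errors.New("no IP yet")
		}
		return nil
	})
}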
	I0804 00:15:26.095635   64502 start.go:364] duration metric: took 52.776761645s to acquireMachinesLock for "embed-certs-877598"
	I0804 00:15:26.095695   64502 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:26.095703   64502 fix.go:54] fixHost starting: 
	I0804 00:15:26.096104   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:26.096143   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:26.113770   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0804 00:15:26.114303   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:26.114742   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:15:26.114768   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:26.115137   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:26.115330   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:26.115508   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:15:26.117156   64502 fix.go:112] recreateIfNeeded on embed-certs-877598: state=Stopped err=<nil>
	I0804 00:15:26.117179   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	W0804 00:15:26.117343   64502 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:26.119743   64502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-877598" ...
	I0804 00:15:21.996520   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.495922   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.995766   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.495923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.995770   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.496788   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.996759   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.996017   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.496445   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.363529   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870304087s)
	I0804 00:15:22.363559   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0804 00:15:22.363573   65087 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:22.363618   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:23.009879   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0804 00:15:23.009924   65087 cache_images.go:123] Successfully loaded all cached images
	I0804 00:15:23.009932   65087 cache_images.go:92] duration metric: took 15.095790334s to LoadCachedImages
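Between 00:15:08 and 00:15:23 the cached tarballs are loaded into CRI-O one at a time with "sudo podman load -i ...", which is why LoadCachedImages takes about 15s here. A compact sketch of that loop; the direct ssh invocation and the loadCachedImages helper are illustrative stand-ins, not minikube's actual code path:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
)

// loadCachedImages replays the "podman load -i" step for each cached tarball
// already transferred to /var/lib/minikube/images on the VM.
func loadCachedImages(host string, tarballs []string) error {
	for _, t := range tarballs {
		remote := filepath.Join("/var/lib/minikube/images", filepath.Base(t))
		cmd := exec.Command("ssh", host, "sudo", "podman", "load", "-i", remote)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", remote, err, out)
		}
	}
	return nil
}

func main() {
	images := []string{
		"kube-apiserver_v1.31.0-rc.0",
		"etcd_3.5.15-0",
		"coredns_v1.11.1",
	}
	if err := loadCachedImages("docker@192.168.61.137", images); err != nil {
		log.Fatal(err)
	}
}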
	I0804 00:15:23.009946   65087 kubeadm.go:934] updating node { 192.168.61.137 8443 v1.31.0-rc.0 crio true true} ...
	I0804 00:15:23.010145   65087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
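The kubelet unit printed above is rendered from the node's name, IP and Kubernetes version before being written as a systemd drop-in. A minimal text/template sketch of producing such a drop-in; the template text is a simplified stand-in, not minikube's exact file:

package main

import (
	"os"
	"text/template"
)

// A simplified stand-in for the kubelet drop-in seen in the log above;
// the file minikube actually writes contains additional flags.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.0-rc.0",
		"NodeName":          "no-preload-118016",
		"NodeIP":            "192.168.61.137",
	})
}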
	I0804 00:15:23.010230   65087 ssh_runner.go:195] Run: crio config
	I0804 00:15:23.057968   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:23.057991   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:23.058002   65087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:23.058022   65087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118016 NodeName:no-preload-118016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:23.058149   65087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-118016"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:23.058210   65087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0804 00:15:23.068635   65087 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:23.068713   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:23.077867   65087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0804 00:15:23.094220   65087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0804 00:15:23.110798   65087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0804 00:15:23.132230   65087 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:23.136622   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:23.149229   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:23.284623   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:23.309115   65087 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016 for IP: 192.168.61.137
	I0804 00:15:23.309212   65087 certs.go:194] generating shared ca certs ...
	I0804 00:15:23.309242   65087 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:23.309451   65087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:23.309509   65087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:23.309525   65087 certs.go:256] generating profile certs ...
	I0804 00:15:23.309633   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.key
	I0804 00:15:23.309718   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key.794a08a1
	I0804 00:15:23.309775   65087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key
	I0804 00:15:23.309951   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:23.309992   65087 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:23.310006   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:23.310050   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:23.310084   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:23.310125   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:23.310186   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:23.310811   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:23.346479   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:23.390508   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:23.419626   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:23.453891   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:15:23.481597   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:23.507749   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:23.537567   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:23.565469   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:23.590844   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:23.618748   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:23.645921   65087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:23.664034   65087 ssh_runner.go:195] Run: openssl version
	I0804 00:15:23.670083   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:23.681080   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685717   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685777   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.691573   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:23.702260   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:23.713185   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717747   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717803   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.723598   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:23.734445   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:23.745394   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750239   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750312   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.756471   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:23.767795   65087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:23.772483   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:23.778613   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:23.784560   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:23.790455   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:23.796260   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:23.802405   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
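The series of "openssl x509 ... -checkend 86400" runs above verifies that none of the existing control-plane certificates expire within the next 24 hours before they are reused. The same check expressed with Go's crypto/x509, as a sketch (the expiresWithin helper is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}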
	I0804 00:15:23.808623   65087 kubeadm.go:392] StartCluster: {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:23.808710   65087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:23.808753   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.857908   65087 cri.go:89] found id: ""
	I0804 00:15:23.857983   65087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:23.868694   65087 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:23.868717   65087 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:23.868789   65087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:23.878826   65087 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:23.879879   65087 kubeconfig.go:125] found "no-preload-118016" server: "https://192.168.61.137:8443"
	I0804 00:15:23.882653   65087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:23.893441   65087 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.137
	I0804 00:15:23.893475   65087 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:23.893489   65087 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:23.893533   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.933954   65087 cri.go:89] found id: ""
	I0804 00:15:23.934026   65087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
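Before reconfiguring the control plane, minikube lists any kube-system containers via crictl and would stop them; in this run none are found (found id: ""), so only the kubelet is stopped. A sketch of that list-then-stop step, using the crictl flags seen in the log; the stop loop is the branch that did not run here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List container IDs in the kube-system namespace, as in the log line above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println(`found id: ""`) // nothing to stop, matching this run
		return
	}
	// Stop whatever was found before restarting the control plane.
	args := append([]string{"crictl", "stop"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		fmt.Println("crictl stop failed:", err)
	}
}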
	I0804 00:15:23.951080   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:23.962250   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:23.962274   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:23.962327   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:23.971760   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:23.971817   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:23.981767   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:23.991443   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:23.991494   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:24.001911   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.011927   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:24.011988   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.022349   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:24.032305   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:24.032371   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
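The grep/rm pairs above implement a simple rule: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (here the files are simply missing), remove it so the subsequent "kubeadm init phase kubeconfig all" regenerates it. The same loop sketched locally, with the paths and endpoint hard-coded for illustration:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: remove it so kubeadm rewrites it.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}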
	I0804 00:15:24.042416   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:24.052403   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:24.163413   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.106900   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.323496   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.410928   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.569137   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:25.569221   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.069288   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.570343   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.615965   65087 api_server.go:72] duration metric: took 1.046825245s to wait for apiserver process to appear ...
	I0804 00:15:26.615997   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:26.616022   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:26.616618   65087 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
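After the control-plane phases run, minikube first waits for a kube-apiserver process (the pgrep loop) and then polls https://192.168.61.137:8443/healthz until it answers; the "connection refused" line above is the normal first attempt. A sketch of such a poll loop follows; skipping TLS verification is an assumption to keep the example short, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for brevity: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.137:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}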
	I0804 00:15:24.788329   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788775   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Found IP for machine: 192.168.39.132
	I0804 00:15:24.788799   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has current primary IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788811   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserving static IP address...
	I0804 00:15:24.789238   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.789266   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | skip adding static IP to network mk-default-k8s-diff-port-969068 - found existing host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"}
	I0804 00:15:24.789287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserved static IP address: 192.168.39.132
	I0804 00:15:24.789303   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for SSH to be available...
	I0804 00:15:24.789333   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Getting to WaitForSSH function...
	I0804 00:15:24.791371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791734   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.791762   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH client type: external
	I0804 00:15:24.791934   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa (-rw-------)
	I0804 00:15:24.791975   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:24.791994   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | About to run SSH command:
	I0804 00:15:24.792010   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | exit 0
	I0804 00:15:24.921420   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | SSH cmd err, output: <nil>: 
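The "Getting to WaitForSSH function" / "exit 0" exchange above is the kvm2 driver confirming the guest's SSH daemon is reachable before provisioning continues. A minimal equivalent that only waits for the TCP port to accept connections, which is a weaker check than actually running "exit 0" (an assumption made to keep the sketch dependency-free):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials host:22 until a connection succeeds or the timeout elapses.
func waitForSSH(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	addr := net.JoinHostPort(host, "22")
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.132", 2*time.Minute))
}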
	I0804 00:15:24.921795   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetConfigRaw
	I0804 00:15:24.922375   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:24.925074   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.925431   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925680   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:15:24.925904   65441 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:24.925924   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:24.926120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:24.928597   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929006   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.929045   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929171   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:24.929334   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929498   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:24.929814   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:24.930001   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:24.930012   65441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:25.046325   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:25.046355   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046703   65441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-969068"
	I0804 00:15:25.046733   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046940   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.049807   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050383   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.050427   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050547   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.050739   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.050937   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.051131   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.051296   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.051504   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.051525   65441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-969068 && echo "default-k8s-diff-port-969068" | sudo tee /etc/hostname
	I0804 00:15:25.182512   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-969068
	
	I0804 00:15:25.182552   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.185673   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186019   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.186051   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186241   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.186425   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186551   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186660   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.186853   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.187034   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.187051   65441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-969068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-969068/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-969068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:25.313435   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:25.313470   65441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:25.313518   65441 buildroot.go:174] setting up certificates
	I0804 00:15:25.313531   65441 provision.go:84] configureAuth start
	I0804 00:15:25.313544   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.313856   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:25.316883   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317233   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.317287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317475   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.319773   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320180   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.320214   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320404   65441 provision.go:143] copyHostCerts
	I0804 00:15:25.320459   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:25.320467   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:25.320531   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:25.320666   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:25.320675   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:25.320702   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:25.320769   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:25.320777   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:25.320804   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:25.320871   65441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-969068 san=[127.0.0.1 192.168.39.132 default-k8s-diff-port-969068 localhost minikube]
	I0804 00:15:25.374535   65441 provision.go:177] copyRemoteCerts
	I0804 00:15:25.374590   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:25.374613   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.377629   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378047   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.378073   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.378478   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.378672   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.378897   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.469632   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:25.495826   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0804 00:15:25.527006   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:25.557603   65441 provision.go:87] duration metric: took 244.055462ms to configureAuth
	I0804 00:15:25.557637   65441 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:25.557873   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:25.557982   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.560974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561339   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.561389   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.561740   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.561881   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.562043   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.562248   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.562456   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.562471   65441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:25.835452   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:25.835480   65441 machine.go:97] duration metric: took 909.563441ms to provisionDockerMachine
	I0804 00:15:25.835496   65441 start.go:293] postStartSetup for "default-k8s-diff-port-969068" (driver="kvm2")
	I0804 00:15:25.835512   65441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:25.835541   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:25.835846   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:25.835873   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.838713   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839124   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.839151   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.839465   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.839634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.839779   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.928376   65441 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:25.932472   65441 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:25.932498   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:25.932608   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:25.932775   65441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:25.932951   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:25.943100   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:25.969517   65441 start.go:296] duration metric: took 134.003956ms for postStartSetup
	I0804 00:15:25.969567   65441 fix.go:56] duration metric: took 20.799045329s for fixHost
	I0804 00:15:25.969591   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.972743   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973172   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.973204   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973342   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.973596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973768   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973944   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.974158   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.974330   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.974343   65441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:26.095438   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730526.053053982
	
	I0804 00:15:26.095462   65441 fix.go:216] guest clock: 1722730526.053053982
	I0804 00:15:26.095472   65441 fix.go:229] Guest: 2024-08-04 00:15:26.053053982 +0000 UTC Remote: 2024-08-04 00:15:25.969572309 +0000 UTC m=+213.641216658 (delta=83.481673ms)
	I0804 00:15:26.095524   65441 fix.go:200] guest clock delta is within tolerance: 83.481673ms
	I0804 00:15:26.095534   65441 start.go:83] releasing machines lock for "default-k8s-diff-port-969068", held for 20.925048627s
	I0804 00:15:26.095570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.095862   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:26.098718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099112   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.099145   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.099929   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100182   65441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:26.100222   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.100347   65441 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:26.100388   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.103393   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103720   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103942   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.103963   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104142   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104159   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.104243   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104347   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104384   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104499   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104545   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104728   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.104881   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.214704   65441 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:26.221287   65441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:26.378021   65441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:26.385673   65441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:26.385751   65441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:26.403073   65441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:26.403104   65441 start.go:495] detecting cgroup driver to use...
	I0804 00:15:26.403193   65441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:26.421108   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:26.435556   65441 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:26.435627   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:26.455219   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:26.477841   65441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:26.626980   65441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:26.806808   65441 docker.go:233] disabling docker service ...
	I0804 00:15:26.806887   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:26.824079   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:26.839225   65441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:26.967375   65441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:27.136156   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:27.151822   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:27.173326   65441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:27.173404   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.184431   65441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:27.184509   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.194890   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.208349   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.222326   65441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:27.237212   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.249571   65441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.274913   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.288929   65441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:27.305789   65441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:27.305863   65441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:27.321708   65441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:27.332129   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:27.482279   65441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:27.638388   65441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:27.638465   65441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:27.644607   65441 start.go:563] Will wait 60s for crictl version
	I0804 00:15:27.644665   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:15:27.648663   65441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:27.691731   65441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:27.691824   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.731365   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.767416   65441 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
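(The 65441 lines above show the CRI-O runtime being prepared for minikube: the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf with sed, then crio is restarted and crictl probed. A minimal, self-contained Go sketch of that edit-and-restart pattern follows; the runCmd helper and local exec are illustrative assumptions, not the project's actual code, which performs the same shell commands over SSH via ssh_runner.go.)

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs one shell command locally and returns a wrapped error with its output.
// In the log above the equivalent commands are executed on the guest over SSH.
func runCmd(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	// The same kind of edits that appear in the log: pin the pause image and cgroup driver,
	// then reload systemd and restart crio so the new config takes effect.
	edits := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, e := range edits {
		if err := runCmd(e); err != nil {
			fmt.Println(err)
			return
		}
	}
}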
	I0804 00:15:26.121074   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Start
	I0804 00:15:26.121263   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring networks are active...
	I0804 00:15:26.122075   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network default is active
	I0804 00:15:26.122471   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network mk-embed-certs-877598 is active
	I0804 00:15:26.122884   64502 main.go:141] libmachine: (embed-certs-877598) Getting domain xml...
	I0804 00:15:26.123684   64502 main.go:141] libmachine: (embed-certs-877598) Creating domain...
	I0804 00:15:27.536026   64502 main.go:141] libmachine: (embed-certs-877598) Waiting to get IP...
	I0804 00:15:27.537165   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.537650   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.537734   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.537654   66522 retry.go:31] will retry after 277.473157ms: waiting for machine to come up
	I0804 00:15:27.817330   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.817824   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.817858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.817788   66522 retry.go:31] will retry after 322.160841ms: waiting for machine to come up
	I0804 00:15:28.141287   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.141818   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.141855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.141775   66522 retry.go:31] will retry after 325.833359ms: waiting for machine to come up
	I0804 00:15:28.469440   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.469976   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.470015   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.469933   66522 retry.go:31] will retry after 372.304971ms: waiting for machine to come up
	I0804 00:15:28.843604   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.844376   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.844400   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.844297   66522 retry.go:31] will retry after 607.361674ms: waiting for machine to come up
	I0804 00:15:29.453082   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:29.453557   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:29.453586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:29.453527   66522 retry.go:31] will retry after 615.002468ms: waiting for machine to come up
	I0804 00:15:30.070598   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.071112   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.071134   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.071079   66522 retry.go:31] will retry after 834.292107ms: waiting for machine to come up
	I0804 00:15:27.116719   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.030589   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.030625   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.030641   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.091459   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.091494   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.116633   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.149335   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.149394   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:30.617009   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.622086   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.622117   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.116320   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.125065   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:31.125143   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.617091   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.627142   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:15:31.636371   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:15:31.636405   65087 api_server.go:131] duration metric: took 5.020400356s to wait for apiserver health ...
	I0804 00:15:31.636414   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:31.636420   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:31.638145   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
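(The api_server.go lines above poll https://192.168.61.137:8443/healthz until the apiserver stops answering 403/500 and returns 200 ok, then move on to CNI configuration. A minimal sketch of such a wait loop follows; the InsecureSkipVerify transport and fixed URL are illustrative assumptions made to keep the example self-contained, whereas minikube authenticates with the cluster's client certificates.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll the apiserver health endpoint until it returns 200 or the deadline passes.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.137:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}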
	I0804 00:15:26.996399   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.496810   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.995825   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.496395   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.996561   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.496735   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.996542   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.496406   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.996259   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.496307   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.639553   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:31.658269   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:31.685188   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:31.703581   65087 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:31.703627   65087 system_pods.go:61] "coredns-6f6b679f8f-9vdxc" [fd645695-cc1d-4394-96b0-832f48e9cf26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:31.703638   65087 system_pods.go:61] "etcd-no-preload-118016" [a329ecd7-7574-48f4-a776-7b7c05465f8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:31.703649   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [43d313aa-1844-488d-8925-b744f504323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:31.703661   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [d56a5461-29d3-47f7-95df-a7fc6b52ca2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:31.703669   65087 system_pods.go:61] "kube-proxy-8bcg7" [c2b43118-5216-41bf-9f16-00f11ca1eab5] Running
	I0804 00:15:31.703678   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [53dc528c-2f00-4ca6-86c6-d02f4533229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:31.703687   65087 system_pods.go:61] "metrics-server-6867b74b74-5xfgz" [c558b60d-3816-406a-addb-96cd42266bd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:31.703698   65087 system_pods.go:61] "storage-provisioner" [1edb442e-272f-4ef7-b3fb-7c46b915c61a] Running
	I0804 00:15:31.703707   65087 system_pods.go:74] duration metric: took 18.49198ms to wait for pod list to return data ...
	I0804 00:15:31.703721   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:31.712702   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:31.712735   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:31.712748   65087 node_conditions.go:105] duration metric: took 9.019815ms to run NodePressure ...
	I0804 00:15:31.712773   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
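(system_pods.go above lists the kube-system pods and reports which containers are not yet ready before kubeadm re-applies the addon phase. A rough client-go sketch of that listing step is below; the kubeconfig path is an assumption for illustration, and the output format is simplified compared to the readiness breakdown in the log.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig and list the kube-system pods with their phases.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}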
	I0804 00:15:27.768972   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:27.772437   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.772860   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:27.772903   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.773135   65441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:27.777834   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:27.792279   65441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:27.792437   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:27.792493   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:27.833330   65441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:27.833453   65441 ssh_runner.go:195] Run: which lz4
	I0804 00:15:27.837836   65441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:15:27.842093   65441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:27.842128   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:29.410529   65441 crio.go:462] duration metric: took 1.572735301s to copy over tarball
	I0804 00:15:29.410610   65441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:32.062492   65441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.651848511s)
	I0804 00:15:32.062533   65441 crio.go:469] duration metric: took 2.651972207s to extract the tarball
	I0804 00:15:32.062545   65441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:32.100003   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:32.144166   65441 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:32.144192   65441 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:32.144201   65441 kubeadm.go:934] updating node { 192.168.39.132 8444 v1.30.3 crio true true} ...
	I0804 00:15:32.144327   65441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-969068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:32.144434   65441 ssh_runner.go:195] Run: crio config
	I0804 00:15:32.197593   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:32.197618   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:32.197630   65441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:32.197658   65441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-969068 NodeName:default-k8s-diff-port-969068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:32.197862   65441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-969068"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:32.197937   65441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:32.208469   65441 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:32.208551   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:32.218194   65441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0804 00:15:32.237731   65441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:32.259599   65441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0804 00:15:32.281113   65441 ssh_runner.go:195] Run: grep 192.168.39.132	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:32.285559   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:32.298722   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
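	(Editor's note: the bash one-liner a few lines above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP 192.168.39.132 before the kubelet is restarted. Below is a minimal, illustrative Go sketch of the same idempotent update; the host name, IP and file path come from the log, while the helper itself is made up and is not minikube's implementation.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostEntry drops any existing line ending in "\t<host>" and appends
	// a fresh "<ip>\t<host>" entry, mirroring the grep/echo pipeline in the log.
	func ensureHostEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // old entry for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostEntry("/etc/hosts", "192.168.39.132", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}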
	I0804 00:15:30.906612   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.907056   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.907086   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.907012   66522 retry.go:31] will retry after 1.489076061s: waiting for machine to come up
	I0804 00:15:32.397239   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:32.397614   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:32.397642   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:32.397568   66522 retry.go:31] will retry after 1.737097329s: waiting for machine to come up
	I0804 00:15:34.135859   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:34.136363   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:34.136393   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:34.136321   66522 retry.go:31] will retry after 2.154712298s: waiting for machine to come up
	I0804 00:15:31.996780   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.496164   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.996444   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.496838   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.996533   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.496300   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.996772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.495937   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.996834   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.496277   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.982926   65087 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989888   65087 kubeadm.go:739] kubelet initialised
	I0804 00:15:31.989926   65087 kubeadm.go:740] duration metric: took 6.968445ms waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989938   65087 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:31.997210   65087 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:34.748142   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:32.432400   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:32.450525   65441 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068 for IP: 192.168.39.132
	I0804 00:15:32.450548   65441 certs.go:194] generating shared ca certs ...
	I0804 00:15:32.450571   65441 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:32.450738   65441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:32.450801   65441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:32.450815   65441 certs.go:256] generating profile certs ...
	I0804 00:15:32.450922   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.key
	I0804 00:15:32.451000   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key.a17bd5dd
	I0804 00:15:32.451053   65441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key
	I0804 00:15:32.451199   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:32.451242   65441 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:32.451255   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:32.451279   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:32.451303   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:32.451326   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:32.451365   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:32.451910   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:32.505178   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:32.557546   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:32.596512   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:32.635476   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 00:15:32.687156   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:32.716537   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:32.746312   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:15:32.777788   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:32.806730   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:32.835822   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:32.864241   65441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:32.886754   65441 ssh_runner.go:195] Run: openssl version
	I0804 00:15:32.893177   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:32.904847   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909871   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909937   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.916357   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:32.927322   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:32.939447   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944221   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944275   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.950218   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:32.966506   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:32.981288   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986761   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986831   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.993077   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:33.007428   65441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:33.013290   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:33.019997   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:33.026423   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:33.033004   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:33.039205   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:33.045367   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
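	(Editor's note: the six `openssl x509 -checkend 86400` invocations above verify that each control-plane certificate remains valid for at least 24 hours before the restart proceeds. A stripped-down Go equivalent of that check is sketched below; the file path and 24h threshold come from the log, the helper name is hypothetical.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, i.e. the question `openssl x509 -checkend 86400` answers via its
	// exit status.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h, regeneration needed")
		}
	}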
	I0804 00:15:33.051462   65441 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:33.051546   65441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:33.051605   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.094354   65441 cri.go:89] found id: ""
	I0804 00:15:33.094433   65441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:33.105416   65441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:33.105439   65441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:33.105480   65441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:33.115838   65441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:33.117466   65441 kubeconfig.go:125] found "default-k8s-diff-port-969068" server: "https://192.168.39.132:8444"
	I0804 00:15:33.120806   65441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:33.130533   65441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.132
	I0804 00:15:33.130567   65441 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:33.130579   65441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:33.130628   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.178718   65441 cri.go:89] found id: ""
	I0804 00:15:33.178813   65441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:33.199000   65441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:33.212169   65441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:33.212188   65441 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:33.212255   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0804 00:15:33.225192   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:33.225254   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:33.239194   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0804 00:15:33.252402   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:33.252470   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:33.265198   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.276564   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:33.276636   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.288785   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0804 00:15:33.299848   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:33.299904   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:33.311115   65441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:33.322121   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:33.442578   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.526815   65441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084197731s)
	I0804 00:15:34.526857   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.803105   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.893681   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.978573   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:34.978668   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.479179   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.979520   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.063056   65441 api_server.go:72] duration metric: took 1.084463955s to wait for apiserver process to appear ...
	I0804 00:15:36.063161   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:36.063203   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.063755   65441 api_server.go:269] stopped: https://192.168.39.132:8444/healthz: Get "https://192.168.39.132:8444/healthz": dial tcp 192.168.39.132:8444: connect: connection refused
	I0804 00:15:36.563501   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.293051   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:36.293675   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:36.293710   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:36.293604   66522 retry.go:31] will retry after 2.826050203s: waiting for machine to come up
	I0804 00:15:39.120961   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:39.121602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:39.121628   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:39.121554   66522 retry.go:31] will retry after 2.710829438s: waiting for machine to come up
	I0804 00:15:36.996761   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.495885   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.995785   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.496550   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.996645   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.995851   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.496685   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.995896   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.495864   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.005216   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.505397   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.405829   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.405895   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.405913   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.433026   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.433063   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.563242   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.568554   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:39.568591   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.064078   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.085940   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.085978   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.564041   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.569785   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.569812   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.063334   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.068113   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.068135   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.563691   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.569214   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.569248   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.063737   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.068227   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.068260   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.563309   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.567740   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.567775   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:43.063306   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:43.067611   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:15:43.073842   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:15:43.073868   65441 api_server.go:131] duration metric: took 7.010684682s to wait for apiserver health ...
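	(Editor's note: the polling loop above keeps hitting https://192.168.39.132:8444/healthz roughly every 500ms, treating connection refusals, 403s and 500s as "not ready yet" until the apiserver finally returns 200 after about 7 seconds. The Go sketch below shows such a wait loop under stated assumptions; it skips TLS verification purely for brevity, whereas the real caller verifies the apiserver certificate against the cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	// Connection errors and non-200 statuses are treated as "apiserver not ready
	// yet", matching the behaviour visible in the log.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative shortcut only; do not skip verification in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out waiting for %s to become healthy", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.132:8444/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}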
	I0804 00:15:43.073879   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:43.073887   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:43.075779   65441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:43.077123   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:43.088611   65441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:43.109845   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:43.119204   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:43.119235   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:43.119246   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:43.119259   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:43.119269   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:43.119275   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:15:43.119282   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:43.119300   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:43.119309   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:15:43.119317   65441 system_pods.go:74] duration metric: took 9.453775ms to wait for pod list to return data ...
	I0804 00:15:43.119328   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:43.122493   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:43.122516   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:43.122528   65441 node_conditions.go:105] duration metric: took 3.191087ms to run NodePressure ...
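	The pod listing and NodePressure checks above can be repeated by hand when triaging a failure like this one (sketch; the kubectl context is assumed to match the minikube profile name):

	    kubectl --context default-k8s-diff-port-969068 -n kube-system get pods -o wide
	    kubectl --context default-k8s-diff-port-969068 describe node default-k8s-diff-port-969068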
	I0804 00:15:43.122547   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:43.391258   65441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395252   65441 kubeadm.go:739] kubelet initialised
	I0804 00:15:43.395274   65441 kubeadm.go:740] duration metric: took 3.992079ms waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395282   65441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:43.400173   65441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.404618   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404645   65441 pod_ready.go:81] duration metric: took 4.449232ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.404665   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404675   65441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.409134   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409165   65441 pod_ready.go:81] duration metric: took 4.471898ms for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.409178   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409190   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.414342   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414362   65441 pod_ready.go:81] duration metric: took 5.160435ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.414374   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414383   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.513956   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.513987   65441 pod_ready.go:81] duration metric: took 99.59507ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.514003   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.514033   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.913592   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913619   65441 pod_ready.go:81] duration metric: took 399.572927ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.913628   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913634   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.313833   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313864   65441 pod_ready.go:81] duration metric: took 400.220214ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.313878   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313886   65441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.713583   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713616   65441 pod_ready.go:81] duration metric: took 399.716432ms for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.713636   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713647   65441 pod_ready.go:38] duration metric: took 1.318356042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
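	The pod_ready loop above skips every pod because the node itself is still NotReady; the equivalent manual check is a kubectl wait on the same labels (sketch; selector taken from the log line above):

	    kubectl --context default-k8s-diff-port-969068 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m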
	I0804 00:15:44.713666   65441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:15:44.725908   65441 ops.go:34] apiserver oom_adj: -16
	I0804 00:15:44.725935   65441 kubeadm.go:597] duration metric: took 11.620489409s to restartPrimaryControlPlane
	I0804 00:15:44.725947   65441 kubeadm.go:394] duration metric: took 11.674491721s to StartCluster
	I0804 00:15:44.725966   65441 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.726046   65441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:15:44.728392   65441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.728702   65441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:15:44.728805   65441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:15:44.728895   65441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728942   65441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.728954   65441 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:15:44.728958   65441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728990   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.728967   65441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.729027   65441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-969068"
	I0804 00:15:44.729039   65441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.729054   65441 addons.go:243] addon metrics-server should already be in state true
	I0804 00:15:44.729143   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.729436   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729470   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729515   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729564   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729598   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729642   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.728909   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:44.730486   65441 out.go:177] * Verifying Kubernetes components...
	I0804 00:15:44.731972   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:44.748737   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0804 00:15:44.749200   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0804 00:15:44.749311   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0804 00:15:44.749582   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749691   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749858   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.750128   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750144   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750153   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750171   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750326   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750347   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750609   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750617   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750810   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.751212   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.751249   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751286   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.751733   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751780   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.754574   65441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.754616   65441 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:15:44.754649   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.755038   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.755080   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.769763   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0804 00:15:44.770311   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.770828   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.770850   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.771209   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.771371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.771935   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0804 00:15:44.773284   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.773416   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0804 00:15:44.773646   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.773854   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.773866   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.773981   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.774227   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.774529   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.774551   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.774665   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.774711   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.774938   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.775078   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.776166   65441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:15:44.776690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.777692   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:15:44.777708   65441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:15:44.777724   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.778473   65441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:41.833728   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:41.834246   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:41.834270   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:41.834210   66522 retry.go:31] will retry after 2.891635961s: waiting for machine to come up
	I0804 00:15:44.727424   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727895   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has current primary IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727919   64502 main.go:141] libmachine: (embed-certs-877598) Found IP for machine: 192.168.50.140
	I0804 00:15:44.727943   64502 main.go:141] libmachine: (embed-certs-877598) Reserving static IP address...
	I0804 00:15:44.728570   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.728602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | skip adding static IP to network mk-embed-certs-877598 - found existing host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"}
	I0804 00:15:44.728617   64502 main.go:141] libmachine: (embed-certs-877598) Reserved static IP address: 192.168.50.140
	I0804 00:15:44.728634   64502 main.go:141] libmachine: (embed-certs-877598) Waiting for SSH to be available...
	I0804 00:15:44.728648   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Getting to WaitForSSH function...
	I0804 00:15:44.731684   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732102   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.732137   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732388   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH client type: external
	I0804 00:15:44.732408   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa (-rw-------)
	I0804 00:15:44.732438   64502 main.go:141] libmachine: (embed-certs-877598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:44.732448   64502 main.go:141] libmachine: (embed-certs-877598) DBG | About to run SSH command:
	I0804 00:15:44.732462   64502 main.go:141] libmachine: (embed-certs-877598) DBG | exit 0
	I0804 00:15:44.873689   64502 main.go:141] libmachine: (embed-certs-877598) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:44.874033   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetConfigRaw
	I0804 00:15:44.874716   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:44.877406   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.877823   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.877855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.878130   64502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/config.json ...
	I0804 00:15:44.878358   64502 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:44.878382   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:44.878563   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:44.880862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881215   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.881253   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881427   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:44.881597   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881785   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881958   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:44.882150   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:44.882381   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:44.882399   64502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:44.998143   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:44.998172   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998534   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:15:44.998564   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.001998   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002508   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.002545   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002691   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.002847   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003026   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003175   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.003388   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.003592   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.003606   64502 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-877598 && echo "embed-certs-877598" | sudo tee /etc/hostname
	I0804 00:15:45.142065   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-877598
	
	I0804 00:15:45.142123   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.145427   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.145858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.145912   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.146133   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.146279   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146422   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146595   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.146778   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.146991   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.147007   64502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-877598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-877598/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-877598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:45.275711   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:45.275748   64502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:45.275775   64502 buildroot.go:174] setting up certificates
	I0804 00:15:45.275790   64502 provision.go:84] configureAuth start
	I0804 00:15:45.275804   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:45.276145   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:45.279645   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280141   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.280166   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280298   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.283135   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283495   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.283521   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283693   64502 provision.go:143] copyHostCerts
	I0804 00:15:45.283754   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:45.283767   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:45.283837   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:45.283954   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:45.283975   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:45.284004   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:45.284168   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:45.284182   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:45.284214   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:45.284280   64502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.embed-certs-877598 san=[127.0.0.1 192.168.50.140 embed-certs-877598 localhost minikube]
	I0804 00:15:45.484805   64502 provision.go:177] copyRemoteCerts
	I0804 00:15:45.484861   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:45.484883   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.488177   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.488621   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488852   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.489032   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.489191   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.489340   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:45.580782   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:45.612118   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:45.638201   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:15:45.665741   64502 provision.go:87] duration metric: took 389.935703ms to configureAuth
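	To confirm the SANs that configureAuth just baked into the server certificate, the generated file can be inspected locally with openssl (sketch; the path is the one shown in the provisioning log above):

	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'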
	I0804 00:15:45.665778   64502 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:45.666008   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:45.666110   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.668942   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669312   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.669343   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.669812   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.669995   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.670158   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.670317   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.670501   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.670522   64502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:44.779708   65441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:44.779730   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:15:44.779747   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.780637   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781098   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.781120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.781424   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.781593   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.781753   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.783024   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783459   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.783479   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783895   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.784054   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.784219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.784343   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.793057   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0804 00:15:44.793581   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.794075   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.794094   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.794413   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.794586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.796274   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.796609   65441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:44.796623   65441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:15:44.796643   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.799445   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.799990   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.800254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.800698   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.800864   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.800974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.801305   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.962413   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:44.983596   65441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:45.057238   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:15:45.057261   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:15:45.082722   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:15:45.082745   65441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:15:45.088213   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:45.115230   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.115261   65441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:15:45.115325   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:45.164676   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.502008   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502040   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502381   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.502440   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502463   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.502476   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502484   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502701   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502718   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.510043   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.510065   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.510305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.510353   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.510364   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217233   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101870491s)
	I0804 00:15:46.217295   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217308   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.217585   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.217609   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217625   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217652   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.217719   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.218073   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.218091   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.218104   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.255756   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.091044347s)
	I0804 00:15:46.255802   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.255819   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256053   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256093   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256101   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256110   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.256117   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256412   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256446   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256454   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256465   65441 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-969068"
	I0804 00:15:46.258662   65441 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
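	With the restart finished, the enabled/disabled state of the three addons can be cross-checked with the addons subcommand (sketch, standard minikube CLI and this profile name assumed):

	    minikube -p default-k8s-diff-port-969068 addons list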
	I0804 00:15:41.995808   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.496612   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.996566   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.495812   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.996095   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.495902   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.996724   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.495854   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.996354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.496185   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.005235   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:44.003809   65087 pod_ready.go:92] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.003847   65087 pod_ready.go:81] duration metric: took 12.006609818s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.003861   65087 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009518   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.009541   65087 pod_ready.go:81] duration metric: took 5.671724ms for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009554   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014897   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.014923   65087 pod_ready.go:81] duration metric: took 5.360171ms for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014938   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521943   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.521968   65087 pod_ready.go:81] duration metric: took 1.507021563s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521983   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527550   65087 pod_ready.go:92] pod "kube-proxy-8bcg7" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.527575   65087 pod_ready.go:81] duration metric: took 5.585026ms for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527588   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604221   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.604245   65087 pod_ready.go:81] duration metric: took 76.648502ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604260   65087 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:46.260578   65441 addons.go:510] duration metric: took 1.531768603s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:15:46.988351   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:45.985471   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:45.985501   64502 machine.go:97] duration metric: took 1.107126695s to provisionDockerMachine
	I0804 00:15:45.985514   64502 start.go:293] postStartSetup for "embed-certs-877598" (driver="kvm2")
	I0804 00:15:45.985527   64502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:45.985554   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:45.985928   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:45.985962   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.989294   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989699   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.989731   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989875   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.990079   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.990230   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.990355   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.085684   64502 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:46.091660   64502 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:46.091690   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:46.091776   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:46.091873   64502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:46.092005   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:46.102373   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:46.129547   64502 start.go:296] duration metric: took 144.018823ms for postStartSetup
	I0804 00:15:46.129594   64502 fix.go:56] duration metric: took 20.033890858s for fixHost
	I0804 00:15:46.129619   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.132803   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133154   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.133190   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133347   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.133580   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.133766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.134016   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.134242   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:46.134454   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:46.134471   64502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:15:46.250499   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730546.219077490
	
	I0804 00:15:46.250528   64502 fix.go:216] guest clock: 1722730546.219077490
	I0804 00:15:46.250539   64502 fix.go:229] Guest: 2024-08-04 00:15:46.21907749 +0000 UTC Remote: 2024-08-04 00:15:46.129599456 +0000 UTC m=+355.401502879 (delta=89.478034ms)
	I0804 00:15:46.250567   64502 fix.go:200] guest clock delta is within tolerance: 89.478034ms
	I0804 00:15:46.250575   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 20.15490553s
	I0804 00:15:46.250609   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.250902   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:46.253782   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254164   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.254194   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254376   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.254967   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255169   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255247   64502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:46.255307   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.255376   64502 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:46.255399   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.260113   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260481   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.260511   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260529   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260702   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.260870   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.260995   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.261022   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.261045   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261182   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.261208   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.261305   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.261451   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261588   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.372061   64502 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:46.378356   64502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:46.527705   64502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:46.534567   64502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:46.534620   64502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:46.550801   64502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:46.550829   64502 start.go:495] detecting cgroup driver to use...
	I0804 00:15:46.550916   64502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:46.568369   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:46.583437   64502 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:46.583496   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:46.599267   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:46.614874   64502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:46.734467   64502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:46.900868   64502 docker.go:233] disabling docker service ...
	I0804 00:15:46.900941   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:46.915612   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:46.929948   64502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:47.056637   64502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:47.175277   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:47.190167   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:47.211062   64502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:47.211115   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.222459   64502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:47.222547   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.232964   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.243663   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.254387   64502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:47.266424   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.277323   64502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.296078   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.307058   64502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:47.317138   64502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:47.317223   64502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:47.332104   64502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:47.342965   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:47.464208   64502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:47.620127   64502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:47.620196   64502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:47.625103   64502 start.go:563] Will wait 60s for crictl version
	I0804 00:15:47.625165   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:15:47.628942   64502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:47.668593   64502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:47.668686   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.700313   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.737281   64502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:47.738730   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:47.741698   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742098   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:47.742144   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742310   64502 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:47.746883   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:47.760111   64502 kubeadm.go:883] updating cluster {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:47.760247   64502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:47.760305   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:47.801700   64502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:47.801766   64502 ssh_runner.go:195] Run: which lz4
	I0804 00:15:47.806337   64502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:15:47.811010   64502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:47.811050   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:49.359157   64502 crio.go:462] duration metric: took 1.552864688s to copy over tarball
	I0804 00:15:49.359245   64502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:46.996215   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.496634   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.996278   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.496184   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.996616   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.496240   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.996433   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.996600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.496459   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.611474   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:49.611879   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.616732   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:48.988818   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:49.988196   65441 node_ready.go:49] node "default-k8s-diff-port-969068" has status "Ready":"True"
	I0804 00:15:49.988220   65441 node_ready.go:38] duration metric: took 5.004585481s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:49.988229   65441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:49.994536   65441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001200   65441 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:50.001229   65441 pod_ready.go:81] duration metric: took 6.665744ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001243   65441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:52.009436   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.759772   64502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400487256s)
	I0804 00:15:51.759836   64502 crio.go:469] duration metric: took 2.40064418s to extract the tarball
	I0804 00:15:51.759848   64502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:51.800038   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:51.845098   64502 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:51.845124   64502 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:51.845134   64502 kubeadm.go:934] updating node { 192.168.50.140 8443 v1.30.3 crio true true} ...
	I0804 00:15:51.845258   64502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-877598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:51.845339   64502 ssh_runner.go:195] Run: crio config
	I0804 00:15:51.895019   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:15:51.895039   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:51.895048   64502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:51.895067   64502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-877598 NodeName:embed-certs-877598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:51.895202   64502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-877598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:51.895272   64502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:51.906363   64502 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:51.906426   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:51.917727   64502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0804 00:15:51.936370   64502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:51.953894   64502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0804 00:15:51.972910   64502 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:51.977288   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:51.990992   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:52.115808   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:52.133326   64502 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598 for IP: 192.168.50.140
	I0804 00:15:52.133373   64502 certs.go:194] generating shared ca certs ...
	I0804 00:15:52.133396   64502 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:52.133564   64502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:52.133613   64502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:52.133628   64502 certs.go:256] generating profile certs ...
	I0804 00:15:52.133736   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/client.key
	I0804 00:15:52.133824   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key.5511d337
	I0804 00:15:52.133873   64502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key
	I0804 00:15:52.134013   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:52.134077   64502 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:52.134091   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:52.134130   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:52.134168   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:52.134200   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:52.134256   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:52.134880   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:52.175985   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:52.209458   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:52.239097   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:52.271037   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0804 00:15:52.317594   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:52.353485   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:52.382159   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:52.407478   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:52.433103   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:52.457233   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:52.481534   64502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:52.500482   64502 ssh_runner.go:195] Run: openssl version
	I0804 00:15:52.509021   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:52.522431   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527125   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527184   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.533310   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:52.546085   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:52.557781   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562516   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562587   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.568403   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:52.580431   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:52.592706   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597280   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597382   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.603284   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:52.616100   64502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:52.621422   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:52.631811   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:52.639130   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:52.646159   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:52.652721   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:52.659459   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:15:52.665864   64502 kubeadm.go:392] StartCluster: {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:52.665991   64502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:52.666044   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.711272   64502 cri.go:89] found id: ""
	I0804 00:15:52.711346   64502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:52.722294   64502 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:52.722321   64502 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:52.722380   64502 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:52.733148   64502 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:52.734706   64502 kubeconfig.go:125] found "embed-certs-877598" server: "https://192.168.50.140:8443"
	I0804 00:15:52.737995   64502 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:52.749941   64502 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.140
	I0804 00:15:52.749986   64502 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:52.749998   64502 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:52.750043   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.793295   64502 cri.go:89] found id: ""
	I0804 00:15:52.793388   64502 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:52.811438   64502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:52.824055   64502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:52.824080   64502 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:52.824128   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:52.835393   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:52.835446   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:52.846732   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:52.856889   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:52.856942   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:52.869951   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.881836   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:52.881909   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.894121   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:52.905643   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:52.905711   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:52.917063   64502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:52.929399   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:53.132145   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.096969   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.325640   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.385886   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.472926   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:54.473002   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.973406   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.473410   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.578082   64502 api_server.go:72] duration metric: took 1.105154357s to wait for apiserver process to appear ...
	I0804 00:15:55.578170   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:55.578207   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:55.578847   64502 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I0804 00:15:51.996447   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.496265   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.996030   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.996313   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.495823   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.996360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.496652   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.996049   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:55.996141   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:56.045001   64758 cri.go:89] found id: ""
	I0804 00:15:56.045031   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.045042   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:56.045049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:56.045114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:56.086505   64758 cri.go:89] found id: ""
	I0804 00:15:56.086535   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.086547   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:56.086554   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:56.086618   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:56.125953   64758 cri.go:89] found id: ""
	I0804 00:15:56.125983   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.125994   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:56.126001   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:56.126060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:56.167313   64758 cri.go:89] found id: ""
	I0804 00:15:56.167343   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.167354   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:56.167361   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:56.167424   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:56.211102   64758 cri.go:89] found id: ""
	I0804 00:15:56.211132   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.211142   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:56.211149   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:56.211231   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:56.246894   64758 cri.go:89] found id: ""
	I0804 00:15:56.246926   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.246937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:56.246945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:56.247012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:56.281952   64758 cri.go:89] found id: ""
	I0804 00:15:56.281980   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.281991   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:56.281998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:56.282060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:56.317685   64758 cri.go:89] found id: ""
	I0804 00:15:56.317719   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.317733   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:56.317745   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:56.317762   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:56.335040   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:56.335069   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:56.475995   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:56.476017   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:56.476033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:56.567508   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:56.567551   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:56.618136   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:56.618166   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:54.112928   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.112987   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.179330   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.789712   65441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.789738   65441 pod_ready.go:81] duration metric: took 4.788487591s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.789749   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799762   65441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.799785   65441 pod_ready.go:81] duration metric: took 10.029856ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799795   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805685   65441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.805708   65441 pod_ready.go:81] duration metric: took 5.905108ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805718   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809797   65441 pod_ready.go:92] pod "kube-proxy-zz7fr" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.809818   65441 pod_ready.go:81] duration metric: took 4.094183ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809827   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820536   65441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.820557   65441 pod_ready.go:81] duration metric: took 10.722903ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820567   65441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:56.827543   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.078916   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.738609   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.738641   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:58.738657   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.772665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.772695   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:59.079121   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.083798   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.083829   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.579242   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.585343   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.585381   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.078877   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.099981   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.100022   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.578505   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.582665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.582692   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.172886   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:59.187045   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:59.187128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:59.225135   64758 cri.go:89] found id: ""
	I0804 00:15:59.225164   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.225173   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:59.225179   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:59.225255   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:59.262538   64758 cri.go:89] found id: ""
	I0804 00:15:59.262566   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.262573   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:59.262578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:59.262635   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:59.301665   64758 cri.go:89] found id: ""
	I0804 00:15:59.301697   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.301708   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:59.301715   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:59.301778   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:59.362742   64758 cri.go:89] found id: ""
	I0804 00:15:59.362766   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.362774   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:59.362779   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:59.362834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:59.404398   64758 cri.go:89] found id: ""
	I0804 00:15:59.404431   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.404509   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:59.404525   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:59.404594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:59.454257   64758 cri.go:89] found id: ""
	I0804 00:15:59.454285   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.454297   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:59.454305   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:59.454363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:59.496790   64758 cri.go:89] found id: ""
	I0804 00:15:59.496818   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.496829   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:59.496837   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:59.496896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:59.537395   64758 cri.go:89] found id: ""
	I0804 00:15:59.537424   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.537431   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:59.537439   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:59.537453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.600005   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:59.600042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:59.617304   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:59.617336   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:59.692828   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:59.692849   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:59.692864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:59.764000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:59.764038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:58.611600   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.110986   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.079326   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.083661   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.083689   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:01.578711   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.583011   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.583040   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:02.078606   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:02.083234   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:16:02.090079   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:16:02.090112   64502 api_server.go:131] duration metric: took 6.511921332s to wait for apiserver health ...
	I0804 00:16:02.090123   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:16:02.090132   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:16:02.092169   64502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:58.829268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.327623   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:02.093704   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:16:02.109001   64502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:16:02.131996   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:16:02.145300   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:16:02.145333   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:16:02.145340   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:16:02.145348   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:16:02.145370   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:16:02.145380   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:16:02.145389   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:16:02.145397   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:16:02.145403   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:16:02.145412   64502 system_pods.go:74] duration metric: took 13.393537ms to wait for pod list to return data ...
	I0804 00:16:02.145425   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:16:02.149623   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:16:02.149651   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:16:02.149661   64502 node_conditions.go:105] duration metric: took 4.231097ms to run NodePressure ...
	I0804 00:16:02.149677   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:16:02.424261   64502 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429537   64502 kubeadm.go:739] kubelet initialised
	I0804 00:16:02.429555   64502 kubeadm.go:740] duration metric: took 5.269005ms waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429563   64502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:02.435433   64502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.440580   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440606   64502 pod_ready.go:81] duration metric: took 5.145511ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.440619   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440628   64502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.445111   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445136   64502 pod_ready.go:81] duration metric: took 4.497361ms for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.445148   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445157   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.450172   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450200   64502 pod_ready.go:81] duration metric: took 5.032514ms for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.450211   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450219   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.536314   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536386   64502 pod_ready.go:81] duration metric: took 86.155481ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.536398   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536409   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.935794   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935830   64502 pod_ready.go:81] duration metric: took 399.405535ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.935842   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935861   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.335730   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335760   64502 pod_ready.go:81] duration metric: took 399.889478ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.335772   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335780   64502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.735762   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735786   64502 pod_ready.go:81] duration metric: took 399.996995ms for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.735795   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735802   64502 pod_ready.go:38] duration metric: took 1.306222891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:03.735818   64502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:16:03.748578   64502 ops.go:34] apiserver oom_adj: -16
	I0804 00:16:03.748602   64502 kubeadm.go:597] duration metric: took 11.026274037s to restartPrimaryControlPlane
	I0804 00:16:03.748611   64502 kubeadm.go:394] duration metric: took 11.082760058s to StartCluster
	I0804 00:16:03.748637   64502 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.748719   64502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:16:03.750554   64502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.750824   64502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:16:03.750900   64502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:16:03.750998   64502 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-877598"
	I0804 00:16:03.751041   64502 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-877598"
	W0804 00:16:03.751053   64502 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:16:03.751051   64502 addons.go:69] Setting default-storageclass=true in profile "embed-certs-877598"
	I0804 00:16:03.751072   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:16:03.751108   64502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-877598"
	I0804 00:16:03.751063   64502 addons.go:69] Setting metrics-server=true in profile "embed-certs-877598"
	I0804 00:16:03.751181   64502 addons.go:234] Setting addon metrics-server=true in "embed-certs-877598"
	W0804 00:16:03.751196   64502 addons.go:243] addon metrics-server should already be in state true
	I0804 00:16:03.751245   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751467   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751503   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751540   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751612   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751088   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751990   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.752017   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.752817   64502 out.go:177] * Verifying Kubernetes components...
	I0804 00:16:03.754613   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:16:03.769684   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0804 00:16:03.769701   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0804 00:16:03.769697   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0804 00:16:03.770197   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770332   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770619   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770792   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770808   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.770935   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770949   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771125   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771327   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771520   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.771545   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771555   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.771938   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.772138   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772195   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.772521   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772565   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.776267   64502 addons.go:234] Setting addon default-storageclass=true in "embed-certs-877598"
	W0804 00:16:03.776292   64502 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:16:03.776327   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.776695   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.776738   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.789183   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0804 00:16:03.789660   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.789796   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0804 00:16:03.790184   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790202   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790246   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.790608   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.790869   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790900   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790985   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.791276   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.791519   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.793005   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.793338   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.795747   64502 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:16:03.795748   64502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:16:03.796208   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0804 00:16:03.796652   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.797194   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.797220   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.797589   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:16:03.797611   64502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:16:03.797632   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.797640   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.797673   64502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:03.797684   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:16:03.797697   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.798266   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.798311   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.801933   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802083   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802417   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802445   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.802766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.802851   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802868   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802936   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803140   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.803166   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.803310   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.803409   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803512   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.818073   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0804 00:16:03.818647   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.819107   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.819130   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.819488   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.819721   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.821982   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.822216   64502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:03.822232   64502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:16:03.822251   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.825593   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826055   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.826090   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826356   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.826526   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.826667   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.826829   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.955019   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:16:03.976453   64502 node_ready.go:35] waiting up to 6m0s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:04.051717   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:04.074720   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:16:04.074740   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:16:04.099578   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:16:04.099606   64502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:16:04.118348   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:04.163390   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:04.163418   64502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:16:04.227379   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:05.143364   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091613097s)
	I0804 00:16:05.143418   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143419   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.025041953s)
	I0804 00:16:05.143430   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143439   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143449   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143726   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143743   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143755   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143764   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.143893   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143915   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143935   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143964   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.144014   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144033   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.144085   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144259   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144305   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144319   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.150739   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.150761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.151073   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.151102   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.151130   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.169806   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.169832   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170103   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.170122   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170148   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170159   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.170171   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170461   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170546   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170563   64502 addons.go:475] Verifying addon metrics-server=true in "embed-certs-877598"
	I0804 00:16:05.172575   64502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0804 00:16:05.173964   64502 addons.go:510] duration metric: took 1.423065893s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0804 00:16:02.307325   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:02.324168   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:02.324233   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:02.370204   64758 cri.go:89] found id: ""
	I0804 00:16:02.370234   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.370250   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:02.370258   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:02.370325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:02.405586   64758 cri.go:89] found id: ""
	I0804 00:16:02.405616   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.405628   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:02.405636   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:02.405694   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:02.445644   64758 cri.go:89] found id: ""
	I0804 00:16:02.445665   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.445675   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:02.445682   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:02.445739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:02.483659   64758 cri.go:89] found id: ""
	I0804 00:16:02.483686   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.483695   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:02.483701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:02.483751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:02.519903   64758 cri.go:89] found id: ""
	I0804 00:16:02.519929   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.519938   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:02.519944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:02.519991   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:02.557373   64758 cri.go:89] found id: ""
	I0804 00:16:02.557401   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.557410   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:02.557416   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:02.557472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:02.594203   64758 cri.go:89] found id: ""
	I0804 00:16:02.594238   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.594249   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:02.594256   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:02.594316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:02.635487   64758 cri.go:89] found id: ""
	I0804 00:16:02.635512   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.635520   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:02.635529   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:02.635543   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:02.686990   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:02.687035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:02.701784   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:02.701810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:02.778626   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:02.778648   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:02.778662   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:02.856056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:02.856097   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
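	The block above is one complete diagnostic pass against the old-k8s-version (v1.20.0) node: minikube probes the CRI for each expected control-plane container by name, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output; the describe-nodes step fails because nothing is serving on localhost:8443 yet. The probe and gather commands below are copied verbatim from the log lines; only the idea of re-running them interactively on the node (for example over minikube ssh) is a suggested manual check:

	    # probe for a control-plane container by name (the loop repeats this for etcd, coredns, kube-scheduler, ...)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # fallback log gathering once no containers are found
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	The same pass repeats every few seconds below until the apiserver appears or the test times out.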
	I0804 00:16:05.402858   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:05.418825   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:05.418900   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:05.458789   64758 cri.go:89] found id: ""
	I0804 00:16:05.458872   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.458887   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:05.458895   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:05.458967   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:05.498258   64758 cri.go:89] found id: ""
	I0804 00:16:05.498284   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.498295   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:05.498302   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:05.498364   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:05.540892   64758 cri.go:89] found id: ""
	I0804 00:16:05.540919   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.540927   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:05.540933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:05.540992   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:05.578876   64758 cri.go:89] found id: ""
	I0804 00:16:05.578911   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.578919   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:05.578924   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:05.578971   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:05.616248   64758 cri.go:89] found id: ""
	I0804 00:16:05.616272   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.616280   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:05.616285   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:05.616339   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:05.654387   64758 cri.go:89] found id: ""
	I0804 00:16:05.654419   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.654428   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:05.654436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:05.654528   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:05.695579   64758 cri.go:89] found id: ""
	I0804 00:16:05.695613   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.695625   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:05.695669   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:05.695752   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:05.740754   64758 cri.go:89] found id: ""
	I0804 00:16:05.740777   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.740785   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:05.740793   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:05.740805   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:05.792091   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:05.792126   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:05.809130   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:05.809164   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:05.888441   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:05.888465   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:05.888479   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:05.969336   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:05.969390   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:03.111834   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.613749   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:03.830570   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:06.328076   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.980692   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:08.480205   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:09.480127   64502 node_ready.go:49] node "embed-certs-877598" has status "Ready":"True"
	I0804 00:16:09.480147   64502 node_ready.go:38] duration metric: took 5.503660587s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:09.480155   64502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:09.485704   64502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491316   64502 pod_ready.go:92] pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:09.491340   64502 pod_ready.go:81] duration metric: took 5.611918ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491348   64502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:08.514981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:08.531117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:08.531188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:08.569167   64758 cri.go:89] found id: ""
	I0804 00:16:08.569199   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.569210   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:08.569218   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:08.569282   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:08.608478   64758 cri.go:89] found id: ""
	I0804 00:16:08.608559   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.608572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:08.608580   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:08.608636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:08.645939   64758 cri.go:89] found id: ""
	I0804 00:16:08.645972   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.645983   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:08.645990   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:08.646050   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:08.685274   64758 cri.go:89] found id: ""
	I0804 00:16:08.685305   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.685316   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:08.685324   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:08.685400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:08.722314   64758 cri.go:89] found id: ""
	I0804 00:16:08.722345   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.722357   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:08.722363   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:08.722427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:08.758577   64758 cri.go:89] found id: ""
	I0804 00:16:08.758606   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.758617   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:08.758624   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:08.758685   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.798734   64758 cri.go:89] found id: ""
	I0804 00:16:08.798761   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.798773   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:08.798781   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:08.798842   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:08.837577   64758 cri.go:89] found id: ""
	I0804 00:16:08.837600   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.837608   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:08.837616   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:08.837627   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:08.894426   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:08.894465   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:08.909851   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:08.909879   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:08.989858   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:08.989878   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:08.989893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:09.081056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:09.081098   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:11.627914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:11.641805   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:11.641896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:11.679002   64758 cri.go:89] found id: ""
	I0804 00:16:11.679028   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.679036   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:11.679042   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:11.679090   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:11.720188   64758 cri.go:89] found id: ""
	I0804 00:16:11.720220   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.720236   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:11.720245   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:11.720307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:11.760085   64758 cri.go:89] found id: ""
	I0804 00:16:11.760118   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.760130   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:11.760138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:11.760198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:11.796220   64758 cri.go:89] found id: ""
	I0804 00:16:11.796249   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.796266   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:11.796274   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:11.796335   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:11.834216   64758 cri.go:89] found id: ""
	I0804 00:16:11.834243   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.834253   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:11.834260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:11.834336   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:11.869205   64758 cri.go:89] found id: ""
	I0804 00:16:11.869230   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.869237   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:11.869243   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:11.869301   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.110499   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.618011   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:08.827284   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.828942   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:11.498264   64502 pod_ready.go:102] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:12.498916   64502 pod_ready.go:92] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:12.498949   64502 pod_ready.go:81] duration metric: took 3.007593153s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:12.498961   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562862   64502 pod_ready.go:92] pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.562896   64502 pod_ready.go:81] duration metric: took 2.063926324s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562910   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573628   64502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.573655   64502 pod_ready.go:81] duration metric: took 10.735916ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573670   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583241   64502 pod_ready.go:92] pod "kube-proxy-wk8zf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.583266   64502 pod_ready.go:81] duration metric: took 9.588875ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583278   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593419   64502 pod_ready.go:92] pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.593445   64502 pod_ready.go:81] duration metric: took 10.158665ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593457   64502 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
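	For the embed-certs-877598 profile the node and all core system pods report Ready within seconds; the test is now only waiting on metrics-server-569cc877fc-hbcm9, which is what the repeated Ready:False lines below track. A minimal manual equivalent is sketched here, assuming the minikube profile name doubles as the kubectl context name (everything else is taken from the log):

	    # node and system-pod readiness for the embed-certs profile (manual check, not part of the test run)
	    kubectl --context embed-certs-877598 get nodes
	    kubectl --context embed-certs-877598 -n kube-system get pods
	    # the one pod the test is still polling
	    kubectl --context embed-certs-877598 -n kube-system get pod metrics-server-569cc877fc-hbcm9 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'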
	I0804 00:16:11.912091   64758 cri.go:89] found id: ""
	I0804 00:16:11.912120   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.912132   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:11.912145   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:11.912203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:11.949570   64758 cri.go:89] found id: ""
	I0804 00:16:11.949603   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.949614   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:11.949625   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:11.949643   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:12.006542   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:12.006575   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:12.022435   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:12.022474   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:12.101007   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:12.101032   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:12.101057   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:12.183836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:12.183876   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:14.725345   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:14.738389   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:14.738464   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:14.780103   64758 cri.go:89] found id: ""
	I0804 00:16:14.780133   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.780142   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:14.780147   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:14.780197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:14.817811   64758 cri.go:89] found id: ""
	I0804 00:16:14.817847   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.817863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:14.817872   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:14.817946   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:14.854450   64758 cri.go:89] found id: ""
	I0804 00:16:14.854478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.854488   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:14.854495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:14.854561   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:14.891862   64758 cri.go:89] found id: ""
	I0804 00:16:14.891891   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.891900   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:14.891905   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:14.891958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:14.928450   64758 cri.go:89] found id: ""
	I0804 00:16:14.928478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.928488   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:14.928495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:14.928554   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:14.965820   64758 cri.go:89] found id: ""
	I0804 00:16:14.965848   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.965860   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:14.965867   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:14.965945   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:15.008725   64758 cri.go:89] found id: ""
	I0804 00:16:15.008874   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.008888   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:15.008897   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:15.008957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:15.044618   64758 cri.go:89] found id: ""
	I0804 00:16:15.044768   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.044792   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:15.044802   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:15.044815   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:15.102786   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:15.102825   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:15.118305   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:15.118347   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:15.196397   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:15.196420   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:15.196435   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:15.277941   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:15.277986   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:13.110969   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.112546   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:13.327840   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.826447   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:16.600315   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.099064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.819354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:17.834271   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:17.834332   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:17.870930   64758 cri.go:89] found id: ""
	I0804 00:16:17.870961   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.870973   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:17.870980   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:17.871040   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:17.907980   64758 cri.go:89] found id: ""
	I0804 00:16:17.908007   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.908016   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:17.908021   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:17.908067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:17.943257   64758 cri.go:89] found id: ""
	I0804 00:16:17.943284   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.943295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:17.943301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:17.943363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:17.982297   64758 cri.go:89] found id: ""
	I0804 00:16:17.982328   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.982338   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:17.982345   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:17.982405   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:18.022780   64758 cri.go:89] found id: ""
	I0804 00:16:18.022810   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.022841   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:18.022850   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:18.022913   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:18.061891   64758 cri.go:89] found id: ""
	I0804 00:16:18.061926   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.061937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:18.061945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:18.062012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:18.100807   64758 cri.go:89] found id: ""
	I0804 00:16:18.100845   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.100855   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:18.100862   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:18.100917   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:18.142011   64758 cri.go:89] found id: ""
	I0804 00:16:18.142044   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.142056   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:18.142066   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:18.142090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:18.195476   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:18.195511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:18.209661   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:18.209690   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:18.282638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:18.282657   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:18.282669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:18.363900   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:18.363938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:20.908753   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:20.922878   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:20.922962   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:20.961013   64758 cri.go:89] found id: ""
	I0804 00:16:20.961041   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.961052   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:20.961058   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:20.961109   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:20.998027   64758 cri.go:89] found id: ""
	I0804 00:16:20.998059   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.998068   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:20.998074   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:20.998121   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:21.035640   64758 cri.go:89] found id: ""
	I0804 00:16:21.035669   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.035680   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:21.035688   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:21.035751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:21.075737   64758 cri.go:89] found id: ""
	I0804 00:16:21.075770   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.075779   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:21.075786   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:21.075846   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:21.120024   64758 cri.go:89] found id: ""
	I0804 00:16:21.120046   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.120054   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:21.120061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:21.120126   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:21.160796   64758 cri.go:89] found id: ""
	I0804 00:16:21.160821   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.160840   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:21.160847   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:21.160907   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:21.195519   64758 cri.go:89] found id: ""
	I0804 00:16:21.195547   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.195558   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:21.195566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:21.195629   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:21.236193   64758 cri.go:89] found id: ""
	I0804 00:16:21.236222   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.236232   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:21.236243   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:21.236258   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:21.295154   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:21.295198   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:21.309540   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:21.309566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:21.389391   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:21.389416   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:21.389433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:21.472771   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:21.472808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:17.611366   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.612092   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.827036   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.827655   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.828026   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.101899   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:23.601687   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.018923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:24.032954   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:24.033013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:24.073677   64758 cri.go:89] found id: ""
	I0804 00:16:24.073703   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.073711   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:24.073716   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:24.073777   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:24.115752   64758 cri.go:89] found id: ""
	I0804 00:16:24.115775   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.115785   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:24.115792   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:24.115849   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:24.152967   64758 cri.go:89] found id: ""
	I0804 00:16:24.153001   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.153017   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:24.153024   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:24.153098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:24.190557   64758 cri.go:89] found id: ""
	I0804 00:16:24.190581   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.190589   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:24.190595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:24.190643   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:24.229312   64758 cri.go:89] found id: ""
	I0804 00:16:24.229341   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.229351   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:24.229373   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:24.229437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:24.265076   64758 cri.go:89] found id: ""
	I0804 00:16:24.265100   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.265107   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:24.265113   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:24.265167   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:24.306508   64758 cri.go:89] found id: ""
	I0804 00:16:24.306534   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.306542   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:24.306547   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:24.306598   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:24.350714   64758 cri.go:89] found id: ""
	I0804 00:16:24.350747   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.350759   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:24.350770   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:24.350785   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:24.366188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:24.366216   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:24.438410   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:24.438431   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:24.438447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:24.522635   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:24.522669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.562647   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:24.562678   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:22.110420   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.111399   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.613839   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.327982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.826914   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.099435   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:28.099896   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:30.100659   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:27.119437   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:27.133330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:27.133426   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:27.170001   64758 cri.go:89] found id: ""
	I0804 00:16:27.170039   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.170048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:27.170054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:27.170112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:27.205811   64758 cri.go:89] found id: ""
	I0804 00:16:27.205843   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.205854   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:27.205861   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:27.205922   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:27.247249   64758 cri.go:89] found id: ""
	I0804 00:16:27.247278   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.247287   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:27.247294   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:27.247360   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:27.285659   64758 cri.go:89] found id: ""
	I0804 00:16:27.285688   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.285697   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:27.285703   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:27.285774   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:27.321039   64758 cri.go:89] found id: ""
	I0804 00:16:27.321066   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.321075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:27.321084   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:27.321130   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:27.359947   64758 cri.go:89] found id: ""
	I0804 00:16:27.359977   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.359988   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:27.359996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:27.360056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:27.401408   64758 cri.go:89] found id: ""
	I0804 00:16:27.401432   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.401440   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:27.401449   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:27.401495   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:27.437297   64758 cri.go:89] found id: ""
	I0804 00:16:27.437326   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.437337   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:27.437347   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:27.437373   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.490594   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:27.490639   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:27.505993   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:27.506021   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:27.588779   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:27.588804   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:27.588820   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:27.681557   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:27.681592   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.225062   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:30.239475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:30.239540   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:30.283896   64758 cri.go:89] found id: ""
	I0804 00:16:30.283923   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.283931   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:30.283938   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:30.284013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:30.321506   64758 cri.go:89] found id: ""
	I0804 00:16:30.321532   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.321539   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:30.321545   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:30.321593   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:30.358314   64758 cri.go:89] found id: ""
	I0804 00:16:30.358340   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.358347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:30.358353   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:30.358400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:30.393561   64758 cri.go:89] found id: ""
	I0804 00:16:30.393587   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.393595   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:30.393600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:30.393646   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:30.429907   64758 cri.go:89] found id: ""
	I0804 00:16:30.429935   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.429943   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:30.429949   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:30.430008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:30.466305   64758 cri.go:89] found id: ""
	I0804 00:16:30.466332   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.466342   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:30.466350   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:30.466408   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:30.505384   64758 cri.go:89] found id: ""
	I0804 00:16:30.505413   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.505424   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:30.505431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:30.505492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:30.541756   64758 cri.go:89] found id: ""
	I0804 00:16:30.541786   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.541796   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:30.541806   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:30.541821   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:30.555516   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:30.555554   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:30.627442   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:30.627463   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:30.627473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:30.701452   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:30.701489   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.743436   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:30.743473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:29.111149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.111470   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:29.327268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.328424   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:32.605884   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:34.608119   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.298898   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:33.315211   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:33.315292   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:33.353171   64758 cri.go:89] found id: ""
	I0804 00:16:33.353207   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.353220   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:33.353229   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:33.353297   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:33.389767   64758 cri.go:89] found id: ""
	I0804 00:16:33.389792   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.389799   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:33.389805   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:33.389851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:33.446889   64758 cri.go:89] found id: ""
	I0804 00:16:33.446928   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.446939   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:33.446946   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:33.447004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:33.487340   64758 cri.go:89] found id: ""
	I0804 00:16:33.487362   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.487370   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:33.487376   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:33.487423   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:33.530398   64758 cri.go:89] found id: ""
	I0804 00:16:33.530421   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.530429   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:33.530435   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:33.530483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:33.568725   64758 cri.go:89] found id: ""
	I0804 00:16:33.568753   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.568762   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:33.568769   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:33.568818   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:33.607205   64758 cri.go:89] found id: ""
	I0804 00:16:33.607232   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.607242   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:33.607249   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:33.607311   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:33.648188   64758 cri.go:89] found id: ""
	I0804 00:16:33.648220   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.648230   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:33.648240   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:33.648256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.700231   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:33.700266   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:33.714899   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:33.714932   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:33.794306   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:33.794326   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:33.794340   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.872446   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:33.872482   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
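The block above is one pass of minikube's log-gathering loop: it looks for a running kube-apiserver with pgrep, asks the CRI runtime (via crictl) for each control-plane container, and, finding none, falls back to journalctl, dmesg and a raw container listing. A minimal sketch of the same container probe, assuming only that crictl is installed and can reach the runtime socket; the loop and variable names are illustrative, not minikube's own code:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        # Same query the log shows: list all containers (running or exited) whose name matches.
        ids=$(sudo crictl ps -a --quiet --name="${name}")
        if [ -z "${ids}" ]; then
            echo "No container was found matching \"${name}\""   # corresponds to the W-level lines above
        else
            echo "${name}: ${ids}"
        fi
    done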
	I0804 00:16:36.415000   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:36.428920   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:36.428996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:36.464784   64758 cri.go:89] found id: ""
	I0804 00:16:36.464810   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.464817   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:36.464823   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:36.464925   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:36.501394   64758 cri.go:89] found id: ""
	I0804 00:16:36.501423   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.501431   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:36.501437   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:36.501497   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:36.537049   64758 cri.go:89] found id: ""
	I0804 00:16:36.537079   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.537090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:36.537102   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:36.537173   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:36.573956   64758 cri.go:89] found id: ""
	I0804 00:16:36.573986   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.573997   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:36.574004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:36.574065   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:36.612996   64758 cri.go:89] found id: ""
	I0804 00:16:36.613016   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.613023   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:36.613029   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:36.613083   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:36.652346   64758 cri.go:89] found id: ""
	I0804 00:16:36.652367   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.652374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:36.652380   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:36.652437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:36.690073   64758 cri.go:89] found id: ""
	I0804 00:16:36.690100   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.690110   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:36.690119   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:36.690182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:36.732436   64758 cri.go:89] found id: ""
	I0804 00:16:36.732466   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.732477   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:36.732487   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:36.732505   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:36.746036   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:36.746060   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:36.818141   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:36.818164   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:36.818179   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.611181   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.611691   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.329719   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.330172   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.100705   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.603600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:36.907689   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:36.907732   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.947104   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:36.947135   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.502960   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:39.516340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:39.516414   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:39.555903   64758 cri.go:89] found id: ""
	I0804 00:16:39.555929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.555939   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:39.555946   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:39.556004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:39.599791   64758 cri.go:89] found id: ""
	I0804 00:16:39.599816   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.599827   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:39.599834   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:39.599894   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:39.642903   64758 cri.go:89] found id: ""
	I0804 00:16:39.642929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.642936   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:39.642944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:39.643004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:39.678667   64758 cri.go:89] found id: ""
	I0804 00:16:39.678693   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.678702   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:39.678709   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:39.678757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:39.716888   64758 cri.go:89] found id: ""
	I0804 00:16:39.716916   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.716926   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:39.716933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:39.717001   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:39.751576   64758 cri.go:89] found id: ""
	I0804 00:16:39.751602   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.751610   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:39.751616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:39.751664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:39.794026   64758 cri.go:89] found id: ""
	I0804 00:16:39.794056   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.794067   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:39.794087   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:39.794158   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:39.841426   64758 cri.go:89] found id: ""
	I0804 00:16:39.841454   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.841464   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:39.841474   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:39.841492   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.902579   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:39.902616   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:39.924467   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:39.924495   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:40.001318   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:40.001345   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:40.001377   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:40.081520   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:40.081552   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:38.111443   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:40.610810   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.827851   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.828752   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.327716   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.100037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.100850   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.623094   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:42.636523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:42.636594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:42.674188   64758 cri.go:89] found id: ""
	I0804 00:16:42.674218   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.674226   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:42.674231   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:42.674277   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:42.708496   64758 cri.go:89] found id: ""
	I0804 00:16:42.708522   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.708532   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:42.708539   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:42.708601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:42.751050   64758 cri.go:89] found id: ""
	I0804 00:16:42.751087   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.751100   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:42.751107   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:42.751170   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:42.788520   64758 cri.go:89] found id: ""
	I0804 00:16:42.788546   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.788555   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:42.788560   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:42.788619   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:42.828273   64758 cri.go:89] found id: ""
	I0804 00:16:42.828297   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.828304   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:42.828309   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:42.828356   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:42.867754   64758 cri.go:89] found id: ""
	I0804 00:16:42.867784   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.867799   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:42.867807   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:42.867864   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:42.903945   64758 cri.go:89] found id: ""
	I0804 00:16:42.903977   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.903988   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:42.903996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:42.904059   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:42.942477   64758 cri.go:89] found id: ""
	I0804 00:16:42.942518   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.942539   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:42.942549   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:42.942565   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.981776   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:42.981810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:43.037601   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:43.037634   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:43.052719   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:43.052746   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:43.122664   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:43.122688   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:43.122702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:45.701275   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:45.714532   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:45.714607   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:45.750932   64758 cri.go:89] found id: ""
	I0804 00:16:45.750955   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.750986   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:45.750991   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:45.751042   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:45.787348   64758 cri.go:89] found id: ""
	I0804 00:16:45.787373   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.787381   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:45.787387   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:45.787441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:45.823390   64758 cri.go:89] found id: ""
	I0804 00:16:45.823419   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.823429   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:45.823436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:45.823498   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:45.861400   64758 cri.go:89] found id: ""
	I0804 00:16:45.861430   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.861440   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:45.861448   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:45.861508   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:45.898992   64758 cri.go:89] found id: ""
	I0804 00:16:45.899024   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.899036   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:45.899043   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:45.899110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:45.934542   64758 cri.go:89] found id: ""
	I0804 00:16:45.934570   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.934582   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:45.934589   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:45.934648   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:45.967908   64758 cri.go:89] found id: ""
	I0804 00:16:45.967938   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.967949   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:45.967957   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:45.968018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:46.006475   64758 cri.go:89] found id: ""
	I0804 00:16:46.006504   64758 logs.go:276] 0 containers: []
	W0804 00:16:46.006516   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:46.006526   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:46.006541   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:46.058760   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:46.058793   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:46.074753   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:46.074777   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:46.149634   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:46.149655   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:46.149671   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:46.230104   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:46.230140   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
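Each "describe nodes" attempt above fails the same way: with no kube-apiserver container found, nothing is serving localhost:8443 on the node, so kubectl's connection is refused. A quick illustrative check from the node (not part of the minikube output itself) would be:

    sudo ss -tlnp | grep ':8443' \
        || echo "nothing listening on 8443 - consistent with the refused connection above"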
	I0804 00:16:43.111492   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:45.611224   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.827683   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:47.326999   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:46.600307   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.100532   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:48.772224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:48.785848   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:48.785935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.825206   64758 cri.go:89] found id: ""
	I0804 00:16:48.825232   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.825242   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:48.825249   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:48.825315   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:48.861559   64758 cri.go:89] found id: ""
	I0804 00:16:48.861588   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.861599   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:48.861607   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:48.861675   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:48.903375   64758 cri.go:89] found id: ""
	I0804 00:16:48.903401   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.903412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:48.903419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:48.903480   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:48.940708   64758 cri.go:89] found id: ""
	I0804 00:16:48.940736   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.940748   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:48.940755   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:48.940817   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:48.976190   64758 cri.go:89] found id: ""
	I0804 00:16:48.976218   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.976228   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:48.976236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:48.976291   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:49.010393   64758 cri.go:89] found id: ""
	I0804 00:16:49.010423   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.010434   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:49.010442   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:49.010506   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:49.046670   64758 cri.go:89] found id: ""
	I0804 00:16:49.046698   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.046707   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:49.046711   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:49.046759   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:49.085254   64758 cri.go:89] found id: ""
	I0804 00:16:49.085284   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.085293   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:49.085302   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:49.085314   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:49.142402   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:49.142433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:49.157063   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:49.157092   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:49.233808   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:49.233829   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:49.233841   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:49.320355   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:49.320395   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:51.862548   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:51.875679   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:51.875750   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.110954   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:50.111867   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.327109   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.327920   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.600258   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.601052   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.911400   64758 cri.go:89] found id: ""
	I0804 00:16:51.911427   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.911437   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:51.911444   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:51.911505   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:51.948825   64758 cri.go:89] found id: ""
	I0804 00:16:51.948853   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.948863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:51.948870   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:51.948935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:51.989458   64758 cri.go:89] found id: ""
	I0804 00:16:51.989488   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.989499   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:51.989506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:51.989568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:52.026663   64758 cri.go:89] found id: ""
	I0804 00:16:52.026685   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.026693   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:52.026698   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:52.026754   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:52.066089   64758 cri.go:89] found id: ""
	I0804 00:16:52.066115   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.066127   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:52.066135   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:52.066198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:52.102159   64758 cri.go:89] found id: ""
	I0804 00:16:52.102185   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.102196   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:52.102203   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:52.102258   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:52.144239   64758 cri.go:89] found id: ""
	I0804 00:16:52.144266   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.144276   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:52.144283   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:52.144344   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:52.180679   64758 cri.go:89] found id: ""
	I0804 00:16:52.180708   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.180717   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:52.180725   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:52.180738   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:52.262074   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:52.262116   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.305913   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:52.305948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:52.357044   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:52.357081   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:52.372090   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:52.372119   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:52.444148   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:54.944910   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:54.958182   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:54.958239   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:54.993629   64758 cri.go:89] found id: ""
	I0804 00:16:54.993657   64758 logs.go:276] 0 containers: []
	W0804 00:16:54.993668   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:54.993675   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:54.993734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:55.029270   64758 cri.go:89] found id: ""
	I0804 00:16:55.029299   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.029310   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:55.029317   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:55.029393   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:55.067923   64758 cri.go:89] found id: ""
	I0804 00:16:55.067951   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.067961   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:55.067968   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:55.068027   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:55.107533   64758 cri.go:89] found id: ""
	I0804 00:16:55.107556   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.107565   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:55.107572   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:55.107633   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:55.143828   64758 cri.go:89] found id: ""
	I0804 00:16:55.143856   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.143868   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:55.143875   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:55.143940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:55.177960   64758 cri.go:89] found id: ""
	I0804 00:16:55.178015   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.178030   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:55.178038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:55.178112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:55.217457   64758 cri.go:89] found id: ""
	I0804 00:16:55.217481   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.217488   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:55.217494   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:55.217538   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:55.259862   64758 cri.go:89] found id: ""
	I0804 00:16:55.259890   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.259898   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:55.259907   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:55.259918   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:55.311566   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:55.311598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:55.327833   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:55.327866   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:55.406475   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:55.406495   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:55.406511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:55.484586   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:55.484618   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.610982   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:54.611276   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.611515   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.827394   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:55.827945   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.099238   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.100223   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.599870   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.028251   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:58.042169   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:58.042236   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:58.076836   64758 cri.go:89] found id: ""
	I0804 00:16:58.076859   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.076868   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:58.076873   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:58.076937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:58.115989   64758 cri.go:89] found id: ""
	I0804 00:16:58.116019   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.116031   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:58.116037   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:58.116099   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:58.155049   64758 cri.go:89] found id: ""
	I0804 00:16:58.155079   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.155090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:58.155097   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:58.155160   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:58.190257   64758 cri.go:89] found id: ""
	I0804 00:16:58.190293   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.190305   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:58.190315   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:58.190370   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:58.225001   64758 cri.go:89] found id: ""
	I0804 00:16:58.225029   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.225038   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:58.225061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:58.225118   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:58.268881   64758 cri.go:89] found id: ""
	I0804 00:16:58.268925   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.268937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:58.268945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:58.269010   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:58.305223   64758 cri.go:89] found id: ""
	I0804 00:16:58.305253   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.305269   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:58.305277   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:58.305340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:58.340517   64758 cri.go:89] found id: ""
	I0804 00:16:58.340548   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.340559   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:58.340570   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:58.340584   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:58.355372   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:58.355403   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:58.426292   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:58.426312   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:58.426326   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:58.509990   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:58.510034   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.550957   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:58.550988   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.104806   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:01.119379   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:01.119453   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:01.158376   64758 cri.go:89] found id: ""
	I0804 00:17:01.158407   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.158419   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:01.158426   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:01.158484   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:01.193826   64758 cri.go:89] found id: ""
	I0804 00:17:01.193858   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.193869   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:01.193876   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:01.193937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:01.228566   64758 cri.go:89] found id: ""
	I0804 00:17:01.228588   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.228600   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:01.228607   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:01.228667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:01.265736   64758 cri.go:89] found id: ""
	I0804 00:17:01.265762   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.265772   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:01.265778   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:01.265834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:01.302655   64758 cri.go:89] found id: ""
	I0804 00:17:01.302679   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.302694   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:01.302699   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:01.302753   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:01.340191   64758 cri.go:89] found id: ""
	I0804 00:17:01.340218   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.340226   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:01.340236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:01.340294   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:01.375767   64758 cri.go:89] found id: ""
	I0804 00:17:01.375789   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.375797   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:01.375802   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:01.375875   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:01.412446   64758 cri.go:89] found id: ""
	I0804 00:17:01.412479   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.412490   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:01.412502   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:01.412518   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.466271   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:01.466309   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:01.480800   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:01.480838   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:01.547909   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:01.547932   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:01.547948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:01.628318   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:01.628351   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.611854   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:01.111626   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.326831   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.327154   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.328038   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.601960   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.099489   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.175883   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:04.189038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:04.189098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:04.229126   64758 cri.go:89] found id: ""
	I0804 00:17:04.229158   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.229167   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:04.229174   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:04.229235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:04.264107   64758 cri.go:89] found id: ""
	I0804 00:17:04.264134   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.264142   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:04.264147   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:04.264203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:04.299959   64758 cri.go:89] found id: ""
	I0804 00:17:04.299996   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.300004   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:04.300010   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:04.300056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:04.337978   64758 cri.go:89] found id: ""
	I0804 00:17:04.338006   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.338016   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:04.338023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:04.338081   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:04.377969   64758 cri.go:89] found id: ""
	I0804 00:17:04.377993   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.378001   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:04.378006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:04.378068   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:04.413036   64758 cri.go:89] found id: ""
	I0804 00:17:04.413062   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.413071   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:04.413078   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:04.413140   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:04.450387   64758 cri.go:89] found id: ""
	I0804 00:17:04.450417   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.450426   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:04.450431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:04.450488   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:04.490132   64758 cri.go:89] found id: ""
	I0804 00:17:04.490165   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.490177   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:04.490188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:04.490204   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:04.560633   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:04.560653   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:04.560668   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:04.639409   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:04.639445   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.682479   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:04.682512   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:04.734823   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:04.734857   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:03.112357   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.828050   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.327249   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.099893   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.100093   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.250174   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:07.263523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:07.263599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:07.300095   64758 cri.go:89] found id: ""
	I0804 00:17:07.300124   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.300136   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:07.300144   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:07.300211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:07.337798   64758 cri.go:89] found id: ""
	I0804 00:17:07.337824   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.337846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:07.337851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:07.337902   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:07.375305   64758 cri.go:89] found id: ""
	I0804 00:17:07.375337   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.375348   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:07.375356   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:07.375406   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:07.411603   64758 cri.go:89] found id: ""
	I0804 00:17:07.411629   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.411639   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:07.411646   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:07.411704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:07.450478   64758 cri.go:89] found id: ""
	I0804 00:17:07.450502   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.450511   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:07.450518   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:07.450564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:07.489972   64758 cri.go:89] found id: ""
	I0804 00:17:07.489997   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.490006   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:07.490012   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:07.490073   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:07.523685   64758 cri.go:89] found id: ""
	I0804 00:17:07.523713   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.523725   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:07.523732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:07.523789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:07.562636   64758 cri.go:89] found id: ""
	I0804 00:17:07.562665   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.562675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:07.562686   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:07.562702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:07.647968   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:07.648004   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.689829   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:07.689856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:07.738333   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:07.738366   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.753419   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:07.753448   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:07.829678   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.329981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:10.343676   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:10.343743   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:10.379546   64758 cri.go:89] found id: ""
	I0804 00:17:10.379575   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.379586   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:10.379594   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:10.379657   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:10.416247   64758 cri.go:89] found id: ""
	I0804 00:17:10.416271   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.416279   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:10.416284   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:10.416340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:10.455261   64758 cri.go:89] found id: ""
	I0804 00:17:10.455291   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.455303   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:10.455310   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:10.455373   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:10.493220   64758 cri.go:89] found id: ""
	I0804 00:17:10.493251   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.493262   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:10.493270   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:10.493329   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:10.538682   64758 cri.go:89] found id: ""
	I0804 00:17:10.538709   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.538720   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:10.538727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:10.538787   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:10.575509   64758 cri.go:89] found id: ""
	I0804 00:17:10.575535   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.575546   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:10.575553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:10.575609   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:10.613163   64758 cri.go:89] found id: ""
	I0804 00:17:10.613188   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.613196   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:10.613201   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:10.613260   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:10.648914   64758 cri.go:89] found id: ""
	I0804 00:17:10.648940   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.648947   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:10.648956   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:10.648968   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:10.700151   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:10.700187   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:10.714971   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:10.714998   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:10.787679   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.787698   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:10.787710   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:10.865008   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:10.865048   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.611770   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:10.110299   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.327569   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.327855   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.603427   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.100524   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.406150   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:13.419602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:13.419659   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:13.456823   64758 cri.go:89] found id: ""
	I0804 00:17:13.456852   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.456863   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:13.456870   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:13.456935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:13.493527   64758 cri.go:89] found id: ""
	I0804 00:17:13.493556   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.493567   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:13.493574   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:13.493697   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:13.529745   64758 cri.go:89] found id: ""
	I0804 00:17:13.529770   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.529784   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:13.529790   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:13.529856   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:13.567775   64758 cri.go:89] found id: ""
	I0804 00:17:13.567811   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.567819   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:13.567824   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:13.567888   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:13.604638   64758 cri.go:89] found id: ""
	I0804 00:17:13.604670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.604678   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:13.604685   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:13.604741   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:13.646638   64758 cri.go:89] found id: ""
	I0804 00:17:13.646670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.646679   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:13.646684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:13.646730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:13.694656   64758 cri.go:89] found id: ""
	I0804 00:17:13.694682   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.694693   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:13.694701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:13.694761   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:13.733738   64758 cri.go:89] found id: ""
	I0804 00:17:13.733762   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.733771   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:13.733780   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:13.733792   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:13.749747   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:13.749775   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:13.832826   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:13.832852   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:13.832868   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:13.914198   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:13.914233   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.952753   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:13.952787   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.503600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:16.516932   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:16.517004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:16.552012   64758 cri.go:89] found id: ""
	I0804 00:17:16.552037   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.552046   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:16.552052   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:16.552110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:16.590626   64758 cri.go:89] found id: ""
	I0804 00:17:16.590653   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.590660   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:16.590666   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:16.590732   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:16.628684   64758 cri.go:89] found id: ""
	I0804 00:17:16.628712   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.628723   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:16.628729   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:16.628792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:16.664934   64758 cri.go:89] found id: ""
	I0804 00:17:16.664969   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.664980   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:16.664987   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:16.665054   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:16.700098   64758 cri.go:89] found id: ""
	I0804 00:17:16.700127   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.700138   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:16.700144   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:16.700214   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:16.736761   64758 cri.go:89] found id: ""
	I0804 00:17:16.736786   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.736795   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:16.736800   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:16.736863   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:16.780010   64758 cri.go:89] found id: ""
	I0804 00:17:16.780033   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.780045   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:16.780050   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:16.780106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:16.816079   64758 cri.go:89] found id: ""
	I0804 00:17:16.816103   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.816112   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:16.816122   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:16.816136   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.866526   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:16.866560   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:16.881254   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:16.881287   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:17:12.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.610978   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.611860   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.827860   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.327167   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.601482   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:19.100152   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:17:16.952491   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:16.952515   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:16.952530   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:17.038943   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:17.038977   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:19.580078   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:19.595538   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:19.595601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:19.632206   64758 cri.go:89] found id: ""
	I0804 00:17:19.632234   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.632245   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:19.632252   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:19.632307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:19.670335   64758 cri.go:89] found id: ""
	I0804 00:17:19.670362   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.670377   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:19.670388   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:19.670447   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:19.707772   64758 cri.go:89] found id: ""
	I0804 00:17:19.707801   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.707812   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:19.707818   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:19.707877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:19.743822   64758 cri.go:89] found id: ""
	I0804 00:17:19.743855   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.743867   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:19.743874   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:19.743930   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:19.781592   64758 cri.go:89] found id: ""
	I0804 00:17:19.781622   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.781632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:19.781640   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:19.781698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:19.818792   64758 cri.go:89] found id: ""
	I0804 00:17:19.818815   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.818823   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:19.818829   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:19.818877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:19.856486   64758 cri.go:89] found id: ""
	I0804 00:17:19.856511   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.856522   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:19.856528   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:19.856586   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:19.901721   64758 cri.go:89] found id: ""
	I0804 00:17:19.901743   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.901754   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:19.901764   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:19.901780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:19.980095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:19.980119   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:19.980134   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:20.072699   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:20.072750   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:20.159007   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:20.159038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:20.211785   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:20.211818   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:19.110218   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.110572   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:18.828527   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:20.828554   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.600968   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.602526   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.603220   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:22.727235   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:22.740922   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:22.740996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:22.780356   64758 cri.go:89] found id: ""
	I0804 00:17:22.780381   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.780392   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:22.780400   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:22.780459   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:22.817075   64758 cri.go:89] found id: ""
	I0804 00:17:22.817100   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.817111   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:22.817119   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:22.817182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:22.857213   64758 cri.go:89] found id: ""
	I0804 00:17:22.857243   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.857253   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:22.857260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:22.857325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:22.894049   64758 cri.go:89] found id: ""
	I0804 00:17:22.894085   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.894096   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:22.894104   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:22.894171   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:22.929718   64758 cri.go:89] found id: ""
	I0804 00:17:22.929746   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.929756   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:22.929770   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:22.929843   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:22.964863   64758 cri.go:89] found id: ""
	I0804 00:17:22.964892   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.964901   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:22.964907   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:22.964958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:23.002565   64758 cri.go:89] found id: ""
	I0804 00:17:23.002593   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.002603   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:23.002611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:23.002676   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:23.038161   64758 cri.go:89] found id: ""
	I0804 00:17:23.038188   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.038199   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:23.038211   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:23.038224   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:23.091865   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:23.091903   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:23.108358   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:23.108388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:23.186417   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:23.186438   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:23.186453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.269119   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:23.269161   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:25.812405   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:25.833174   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:25.833253   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:25.881654   64758 cri.go:89] found id: ""
	I0804 00:17:25.881681   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.881690   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:25.881696   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:25.881757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:25.936968   64758 cri.go:89] found id: ""
	I0804 00:17:25.936997   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.937006   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:25.937011   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:25.937066   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:25.972437   64758 cri.go:89] found id: ""
	I0804 00:17:25.972462   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.972470   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:25.972475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:25.972529   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:26.008306   64758 cri.go:89] found id: ""
	I0804 00:17:26.008346   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.008357   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:26.008366   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:26.008435   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:26.045593   64758 cri.go:89] found id: ""
	I0804 00:17:26.045620   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.045632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:26.045639   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:26.045696   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:26.084170   64758 cri.go:89] found id: ""
	I0804 00:17:26.084195   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.084205   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:26.084212   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:26.084272   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:26.122524   64758 cri.go:89] found id: ""
	I0804 00:17:26.122551   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.122559   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:26.122565   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:26.122623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:26.159264   64758 cri.go:89] found id: ""
	I0804 00:17:26.159297   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.159308   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:26.159320   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:26.159337   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:26.205692   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:26.205718   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:26.257286   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:26.257321   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:26.271582   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:26.271611   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:26.344562   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:26.344586   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:26.344598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.112800   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.610507   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.327294   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.828519   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.100160   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.100351   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.929410   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:28.943941   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:28.944003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:28.986127   64758 cri.go:89] found id: ""
	I0804 00:17:28.986157   64758 logs.go:276] 0 containers: []
	W0804 00:17:28.986169   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:28.986176   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:28.986237   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:29.025528   64758 cri.go:89] found id: ""
	I0804 00:17:29.025556   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.025564   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:29.025570   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:29.025624   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:29.059525   64758 cri.go:89] found id: ""
	I0804 00:17:29.059553   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.059561   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:29.059566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:29.059614   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:29.097451   64758 cri.go:89] found id: ""
	I0804 00:17:29.097489   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.097499   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:29.097506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:29.097564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:29.135504   64758 cri.go:89] found id: ""
	I0804 00:17:29.135532   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.135540   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:29.135546   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:29.135601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:29.175277   64758 cri.go:89] found id: ""
	I0804 00:17:29.175314   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.175324   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:29.175332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:29.175391   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:29.210275   64758 cri.go:89] found id: ""
	I0804 00:17:29.210303   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.210314   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:29.210321   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:29.210382   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:29.246138   64758 cri.go:89] found id: ""
	I0804 00:17:29.246174   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.246186   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:29.246196   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:29.246213   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:29.298935   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:29.298971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:29.313342   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:29.313388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:29.384609   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:29.384635   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:29.384650   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:29.461759   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:29.461795   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:27.611021   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:29.612149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:27.831367   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.327878   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.328772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.101073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.600832   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.010152   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:32.023609   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:32.023677   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:32.062480   64758 cri.go:89] found id: ""
	I0804 00:17:32.062508   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.062517   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:32.062523   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:32.062590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:32.099601   64758 cri.go:89] found id: ""
	I0804 00:17:32.099627   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.099634   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:32.099640   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:32.099691   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:32.138651   64758 cri.go:89] found id: ""
	I0804 00:17:32.138680   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.138689   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:32.138694   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:32.138751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:32.182224   64758 cri.go:89] found id: ""
	I0804 00:17:32.182249   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.182257   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:32.182264   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:32.182318   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:32.224381   64758 cri.go:89] found id: ""
	I0804 00:17:32.224410   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.224421   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:32.224429   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:32.224486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:32.261569   64758 cri.go:89] found id: ""
	I0804 00:17:32.261600   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.261609   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:32.261615   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:32.261663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:32.304769   64758 cri.go:89] found id: ""
	I0804 00:17:32.304793   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.304807   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:32.304814   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:32.304867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:32.348695   64758 cri.go:89] found id: ""
	I0804 00:17:32.348727   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.348736   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:32.348745   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:32.348757   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.389444   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:32.389473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:32.442901   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:32.442938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:32.457562   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:32.457588   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:32.529121   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:32.529144   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:32.529160   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.114712   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:35.129725   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:35.129795   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:35.167226   64758 cri.go:89] found id: ""
	I0804 00:17:35.167248   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.167257   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:35.167262   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:35.167310   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:35.200889   64758 cri.go:89] found id: ""
	I0804 00:17:35.200914   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.200922   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:35.200927   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:35.201000   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:35.234899   64758 cri.go:89] found id: ""
	I0804 00:17:35.234927   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.234938   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:35.234945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:35.235003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:35.271355   64758 cri.go:89] found id: ""
	I0804 00:17:35.271393   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.271405   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:35.271412   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:35.271471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:35.313557   64758 cri.go:89] found id: ""
	I0804 00:17:35.313585   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.313595   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:35.313602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:35.313663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:35.352931   64758 cri.go:89] found id: ""
	I0804 00:17:35.352960   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.352971   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:35.352979   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:35.353046   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:35.391202   64758 cri.go:89] found id: ""
	I0804 00:17:35.391232   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.391256   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:35.391263   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:35.391337   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:35.427599   64758 cri.go:89] found id: ""
	I0804 00:17:35.427627   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.427638   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:35.427649   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:35.427666   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:35.482025   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:35.482061   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:35.498274   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:35.498303   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:35.572606   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:35.572631   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:35.572644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.655534   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:35.655566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.114835   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.610785   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.827077   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.827108   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.601588   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.602210   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:40.602295   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.205756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:38.218974   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:38.219044   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:38.253798   64758 cri.go:89] found id: ""
	I0804 00:17:38.253827   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.253839   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:38.253852   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:38.253911   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:38.291074   64758 cri.go:89] found id: ""
	I0804 00:17:38.291102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.291113   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:38.291120   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:38.291182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:38.332097   64758 cri.go:89] found id: ""
	I0804 00:17:38.332123   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.332133   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:38.332140   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:38.332198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:38.370074   64758 cri.go:89] found id: ""
	I0804 00:17:38.370102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.370110   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:38.370117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:38.370176   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:38.406962   64758 cri.go:89] found id: ""
	I0804 00:17:38.406984   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.406993   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:38.406998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:38.407051   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:38.447532   64758 cri.go:89] found id: ""
	I0804 00:17:38.447562   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.447572   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:38.447579   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:38.447653   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:38.484326   64758 cri.go:89] found id: ""
	I0804 00:17:38.484356   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.484368   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:38.484375   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:38.484444   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:38.521831   64758 cri.go:89] found id: ""
	I0804 00:17:38.521858   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.521869   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:38.521880   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:38.521893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.570540   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:38.570569   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:38.624921   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:38.624953   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:38.639451   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:38.639477   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:38.714435   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:38.714459   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:38.714475   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:41.295160   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:41.310032   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:41.310108   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:41.350363   64758 cri.go:89] found id: ""
	I0804 00:17:41.350393   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.350404   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:41.350412   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:41.350475   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:41.391662   64758 cri.go:89] found id: ""
	I0804 00:17:41.391691   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.391698   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:41.391703   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:41.391760   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:41.429653   64758 cri.go:89] found id: ""
	I0804 00:17:41.429678   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.429686   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:41.429692   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:41.429739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:41.469456   64758 cri.go:89] found id: ""
	I0804 00:17:41.469483   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.469494   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:41.469505   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:41.469566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:41.506124   64758 cri.go:89] found id: ""
	I0804 00:17:41.506154   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.506164   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:41.506171   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:41.506234   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:41.543139   64758 cri.go:89] found id: ""
	I0804 00:17:41.543171   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.543182   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:41.543190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:41.543252   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:41.580537   64758 cri.go:89] found id: ""
	I0804 00:17:41.580568   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.580578   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:41.580585   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:41.580652   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:41.619828   64758 cri.go:89] found id: ""
	I0804 00:17:41.619854   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.619862   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:41.619869   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:41.619882   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:41.660749   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:41.660780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:41.712889   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:41.712924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:41.726422   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:41.726447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:41.805673   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:41.805697   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:41.805712   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:37.110193   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.111203   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.327800   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.327910   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.099815   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.101262   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:44.386563   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:44.399891   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:44.399954   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:44.434270   64758 cri.go:89] found id: ""
	I0804 00:17:44.434297   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.434305   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:44.434311   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:44.434372   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:44.469423   64758 cri.go:89] found id: ""
	I0804 00:17:44.469454   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.469463   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:44.469468   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:44.469535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:44.505511   64758 cri.go:89] found id: ""
	I0804 00:17:44.505539   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.505547   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:44.505553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:44.505602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:44.540897   64758 cri.go:89] found id: ""
	I0804 00:17:44.540922   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.540932   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:44.540937   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:44.540996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:44.578722   64758 cri.go:89] found id: ""
	I0804 00:17:44.578747   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.578755   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:44.578760   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:44.578812   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:44.615838   64758 cri.go:89] found id: ""
	I0804 00:17:44.615863   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.615874   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:44.615881   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:44.615940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:44.657695   64758 cri.go:89] found id: ""
	I0804 00:17:44.657724   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.657734   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:44.657741   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:44.657916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:44.695852   64758 cri.go:89] found id: ""
	I0804 00:17:44.695882   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.695892   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:44.695901   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:44.695912   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:44.754643   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:44.754687   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:44.773964   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:44.773994   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:44.857544   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:44.857567   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:44.857583   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.952987   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:44.953027   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:43.610772   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.611480   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.827218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:46.327323   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.600755   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.099574   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.504957   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:47.520153   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:47.520232   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:47.557303   64758 cri.go:89] found id: ""
	I0804 00:17:47.557326   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.557334   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:47.557339   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:47.557410   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:47.595626   64758 cri.go:89] found id: ""
	I0804 00:17:47.595655   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.595665   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:47.595675   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:47.595733   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:47.633430   64758 cri.go:89] found id: ""
	I0804 00:17:47.633458   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.633466   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:47.633472   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:47.633525   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:47.670116   64758 cri.go:89] found id: ""
	I0804 00:17:47.670140   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.670149   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:47.670154   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:47.670200   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:47.709019   64758 cri.go:89] found id: ""
	I0804 00:17:47.709042   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.709050   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:47.709055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:47.709111   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:47.745230   64758 cri.go:89] found id: ""
	I0804 00:17:47.745251   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.745259   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:47.745265   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:47.745319   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:47.787957   64758 cri.go:89] found id: ""
	I0804 00:17:47.787985   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.787996   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:47.788004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:47.788063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:47.821451   64758 cri.go:89] found id: ""
	I0804 00:17:47.821477   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.821488   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:47.821498   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:47.821516   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:47.903035   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:47.903139   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:47.903162   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:47.986659   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:47.986702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:48.037921   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:48.037951   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:48.095354   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:48.095389   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:50.613264   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:50.627717   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:50.627792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:50.669311   64758 cri.go:89] found id: ""
	I0804 00:17:50.669338   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.669347   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:50.669370   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:50.669438   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:50.714674   64758 cri.go:89] found id: ""
	I0804 00:17:50.714704   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.714713   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:50.714718   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:50.714769   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:50.755291   64758 cri.go:89] found id: ""
	I0804 00:17:50.755318   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.755326   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:50.755332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:50.755394   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:50.801927   64758 cri.go:89] found id: ""
	I0804 00:17:50.801955   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.801964   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:50.801970   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:50.802020   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:50.845096   64758 cri.go:89] found id: ""
	I0804 00:17:50.845121   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.845130   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:50.845136   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:50.845193   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:50.882664   64758 cri.go:89] found id: ""
	I0804 00:17:50.882694   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.882705   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:50.882712   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:50.882771   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:50.921233   64758 cri.go:89] found id: ""
	I0804 00:17:50.921260   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.921268   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:50.921273   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:50.921326   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:50.955254   64758 cri.go:89] found id: ""
	I0804 00:17:50.955286   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.955298   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:50.955311   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:50.955329   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:51.010001   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:51.010037   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:51.024943   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:51.024966   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:51.096095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:51.096123   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:51.096139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:51.177829   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:51.177864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:47.611778   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.110408   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:48.328693   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.828022   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.609609   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.100616   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:53.720665   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:53.736318   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:53.736380   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:53.772887   64758 cri.go:89] found id: ""
	I0804 00:17:53.772916   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.772926   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:53.772934   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:53.772995   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:53.811771   64758 cri.go:89] found id: ""
	I0804 00:17:53.811797   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.811837   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:53.811845   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:53.811906   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:53.846684   64758 cri.go:89] found id: ""
	I0804 00:17:53.846716   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.846726   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:53.846736   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:53.846798   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:53.883550   64758 cri.go:89] found id: ""
	I0804 00:17:53.883581   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.883592   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:53.883600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:53.883662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:53.921031   64758 cri.go:89] found id: ""
	I0804 00:17:53.921061   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.921072   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:53.921080   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:53.921153   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:53.960338   64758 cri.go:89] found id: ""
	I0804 00:17:53.960364   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.960374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:53.960381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:53.960441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:53.998404   64758 cri.go:89] found id: ""
	I0804 00:17:53.998434   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.998450   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:53.998458   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:53.998520   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:54.033417   64758 cri.go:89] found id: ""
	I0804 00:17:54.033444   64758 logs.go:276] 0 containers: []
	W0804 00:17:54.033453   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:54.033461   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:54.033473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:54.071945   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:54.071971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:54.124614   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:54.124644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:54.140757   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:54.140783   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:54.241735   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:54.241754   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:54.241769   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:56.821591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:56.836569   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:56.836631   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:56.872013   64758 cri.go:89] found id: ""
	I0804 00:17:56.872039   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.872048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:56.872054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:56.872110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:52.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.111566   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.828335   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:54.830625   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.831382   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:57.101663   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.600253   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.908022   64758 cri.go:89] found id: ""
	I0804 00:17:56.908051   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.908061   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:56.908067   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:56.908114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:56.943309   64758 cri.go:89] found id: ""
	I0804 00:17:56.943336   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.943347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:56.943359   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:56.943415   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:56.977799   64758 cri.go:89] found id: ""
	I0804 00:17:56.977839   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.977847   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:56.977853   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:56.977916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:57.015185   64758 cri.go:89] found id: ""
	I0804 00:17:57.015213   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.015223   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:57.015237   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:57.015295   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:57.051856   64758 cri.go:89] found id: ""
	I0804 00:17:57.051879   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.051887   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:57.051893   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:57.051944   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:57.086349   64758 cri.go:89] found id: ""
	I0804 00:17:57.086376   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.086387   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:57.086393   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:57.086439   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:57.125005   64758 cri.go:89] found id: ""
	I0804 00:17:57.125048   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.125064   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:57.125076   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:57.125090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:57.200348   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:57.200382   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:57.240899   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:57.240924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.294331   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:57.294375   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:57.308388   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:57.308429   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:57.382602   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:59.883070   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:59.897055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:59.897116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:59.932983   64758 cri.go:89] found id: ""
	I0804 00:17:59.933012   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.933021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:59.933029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:59.933088   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:59.971781   64758 cri.go:89] found id: ""
	I0804 00:17:59.971807   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.971815   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:59.971820   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:59.971878   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:00.008381   64758 cri.go:89] found id: ""
	I0804 00:18:00.008406   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.008414   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:00.008419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:00.008483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:00.053257   64758 cri.go:89] found id: ""
	I0804 00:18:00.053281   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.053290   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:00.053295   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:00.053342   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:00.089891   64758 cri.go:89] found id: ""
	I0804 00:18:00.089925   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.089936   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:00.089943   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:00.090008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:00.129833   64758 cri.go:89] found id: ""
	I0804 00:18:00.129863   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.129875   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:00.129884   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:00.129942   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:00.181324   64758 cri.go:89] found id: ""
	I0804 00:18:00.181390   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.181403   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:00.181410   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:00.181471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:00.224426   64758 cri.go:89] found id: ""
	I0804 00:18:00.224451   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.224459   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:00.224467   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:00.224481   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:00.240122   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:00.240155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:00.317324   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:00.317346   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:00.317379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:00.398917   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:00.398952   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:00.440730   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:00.440758   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.111741   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.611509   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.327597   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:01.328678   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.099384   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.100512   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.992128   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:03.006787   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:03.006870   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:03.041291   64758 cri.go:89] found id: ""
	I0804 00:18:03.041321   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.041332   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:03.041341   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:03.041427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:03.077822   64758 cri.go:89] found id: ""
	I0804 00:18:03.077851   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.077863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:03.077871   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:03.077928   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:03.116579   64758 cri.go:89] found id: ""
	I0804 00:18:03.116603   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.116611   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:03.116616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:03.116662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:03.154904   64758 cri.go:89] found id: ""
	I0804 00:18:03.154931   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.154942   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:03.154950   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:03.155018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:03.190300   64758 cri.go:89] found id: ""
	I0804 00:18:03.190328   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.190341   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:03.190349   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:03.190413   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:03.225975   64758 cri.go:89] found id: ""
	I0804 00:18:03.226004   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.226016   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:03.226023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:03.226087   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:03.271499   64758 cri.go:89] found id: ""
	I0804 00:18:03.271525   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.271535   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:03.271543   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:03.271602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:03.308643   64758 cri.go:89] found id: ""
	I0804 00:18:03.308668   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.308675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:03.308684   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:03.308698   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:03.324528   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:03.324562   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:03.401102   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:03.401125   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:03.401139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:03.481817   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:03.481864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:03.522568   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:03.522601   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.074678   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:06.089765   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:06.089844   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:06.128372   64758 cri.go:89] found id: ""
	I0804 00:18:06.128400   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.128411   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:06.128419   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:06.128467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:06.169488   64758 cri.go:89] found id: ""
	I0804 00:18:06.169515   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.169525   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:06.169532   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:06.169590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:06.207969   64758 cri.go:89] found id: ""
	I0804 00:18:06.207998   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.208009   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:06.208015   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:06.208067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:06.244497   64758 cri.go:89] found id: ""
	I0804 00:18:06.244521   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.244529   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:06.244535   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:06.244592   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:06.282905   64758 cri.go:89] found id: ""
	I0804 00:18:06.282935   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.282945   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:06.282952   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:06.283013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:06.322498   64758 cri.go:89] found id: ""
	I0804 00:18:06.322523   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.322530   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:06.322537   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:06.322583   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:06.361377   64758 cri.go:89] found id: ""
	I0804 00:18:06.361402   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.361412   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:06.361420   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:06.361485   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:06.402082   64758 cri.go:89] found id: ""
	I0804 00:18:06.402112   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.402120   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:06.402128   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:06.402141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.452052   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:06.452089   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:06.466695   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:06.466734   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:06.546115   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:06.546140   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:06.546155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:06.639671   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:06.639708   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:02.111360   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.612557   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:03.330392   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:05.828925   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.603713   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.100060   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.193473   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:09.207696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:09.207755   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:09.247757   64758 cri.go:89] found id: ""
	I0804 00:18:09.247784   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.247795   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:09.247802   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:09.247867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:09.285516   64758 cri.go:89] found id: ""
	I0804 00:18:09.285549   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.285559   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:09.285567   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:09.285628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:09.321689   64758 cri.go:89] found id: ""
	I0804 00:18:09.321715   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.321725   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:09.321732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:09.321789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:09.358135   64758 cri.go:89] found id: ""
	I0804 00:18:09.358158   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.358166   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:09.358176   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:09.358223   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:09.393642   64758 cri.go:89] found id: ""
	I0804 00:18:09.393667   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.393675   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:09.393681   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:09.393730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:09.430651   64758 cri.go:89] found id: ""
	I0804 00:18:09.430674   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.430683   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:09.430689   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:09.430734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:09.472433   64758 cri.go:89] found id: ""
	I0804 00:18:09.472460   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.472469   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:09.472474   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:09.472533   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:09.511147   64758 cri.go:89] found id: ""
	I0804 00:18:09.511171   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.511179   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:09.511187   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:09.511207   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:09.560099   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:09.560142   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:09.574609   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:09.574641   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:09.646863   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:09.646891   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:09.646906   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:09.727309   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:09.727352   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:09.111726   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.611445   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:08.329278   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:10.827361   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.600326   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:14.099811   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:12.268925   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:12.284737   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:12.284813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:12.326015   64758 cri.go:89] found id: ""
	I0804 00:18:12.326036   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.326044   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:12.326049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:12.326095   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:12.374096   64758 cri.go:89] found id: ""
	I0804 00:18:12.374129   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.374138   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:12.374143   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:12.374235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:12.426467   64758 cri.go:89] found id: ""
	I0804 00:18:12.426493   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.426502   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:12.426509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:12.426570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:12.485034   64758 cri.go:89] found id: ""
	I0804 00:18:12.485060   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.485072   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:12.485079   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:12.485138   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:12.523490   64758 cri.go:89] found id: ""
	I0804 00:18:12.523517   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.523525   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:12.523530   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:12.523577   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:12.563318   64758 cri.go:89] found id: ""
	I0804 00:18:12.563347   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.563358   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:12.563365   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:12.563425   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:12.600455   64758 cri.go:89] found id: ""
	I0804 00:18:12.600482   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.600492   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:12.600499   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:12.600566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:12.641146   64758 cri.go:89] found id: ""
	I0804 00:18:12.641170   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.641178   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:12.641186   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:12.641197   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:12.697240   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:12.697274   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:12.711399   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:12.711432   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:12.794022   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:12.794050   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:12.794067   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:12.881327   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:12.881379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:15.425765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:15.439338   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:15.439420   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:15.477964   64758 cri.go:89] found id: ""
	I0804 00:18:15.477991   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.478002   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:15.478009   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:15.478069   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:15.514554   64758 cri.go:89] found id: ""
	I0804 00:18:15.514574   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.514583   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:15.514588   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:15.514636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:15.549702   64758 cri.go:89] found id: ""
	I0804 00:18:15.549732   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.549741   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:15.549747   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:15.549813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:15.584619   64758 cri.go:89] found id: ""
	I0804 00:18:15.584663   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.584675   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:15.584683   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:15.584746   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:15.625084   64758 cri.go:89] found id: ""
	I0804 00:18:15.625111   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.625121   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:15.625128   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:15.625192   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:15.666629   64758 cri.go:89] found id: ""
	I0804 00:18:15.666655   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.666664   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:15.666673   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:15.666726   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:15.704287   64758 cri.go:89] found id: ""
	I0804 00:18:15.704316   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.704324   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:15.704330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:15.704383   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:15.740629   64758 cri.go:89] found id: ""
	I0804 00:18:15.740659   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.740668   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:15.740678   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:15.740702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:15.794093   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:15.794124   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:15.807629   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:15.807659   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:15.887638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:15.887665   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:15.887680   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:15.972935   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:15.972978   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:13.611758   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.613472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:13.327640   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.827432   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:16.100599   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.603192   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.518022   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:18.532360   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:18.532433   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:18.565519   64758 cri.go:89] found id: ""
	I0804 00:18:18.565544   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.565552   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:18.565557   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:18.565612   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:18.599939   64758 cri.go:89] found id: ""
	I0804 00:18:18.599967   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.599978   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:18.599985   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:18.600055   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:18.639035   64758 cri.go:89] found id: ""
	I0804 00:18:18.639062   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.639070   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:18.639076   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:18.639124   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:18.677188   64758 cri.go:89] found id: ""
	I0804 00:18:18.677210   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.677218   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:18.677223   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:18.677268   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:18.715892   64758 cri.go:89] found id: ""
	I0804 00:18:18.715921   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.715932   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:18.715940   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:18.716005   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:18.752274   64758 cri.go:89] found id: ""
	I0804 00:18:18.752298   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.752307   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:18.752313   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:18.752368   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:18.795251   64758 cri.go:89] found id: ""
	I0804 00:18:18.795279   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.795288   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:18.795293   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:18.795353   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.830842   64758 cri.go:89] found id: ""
	I0804 00:18:18.830866   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.830874   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:18.830882   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:18.830893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:18.883687   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:18.883719   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:18.898406   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:18.898433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:18.973191   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:18.973215   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:18.973231   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:19.054094   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:19.054141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:21.597245   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:21.612534   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:21.612605   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:21.649391   64758 cri.go:89] found id: ""
	I0804 00:18:21.649415   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.649426   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:21.649434   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:21.649492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:21.683202   64758 cri.go:89] found id: ""
	I0804 00:18:21.683226   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.683233   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:21.683244   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:21.683300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:21.717450   64758 cri.go:89] found id: ""
	I0804 00:18:21.717475   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.717484   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:21.717489   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:21.717547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:21.752559   64758 cri.go:89] found id: ""
	I0804 00:18:21.752588   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.752596   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:21.752602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:21.752650   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:21.788336   64758 cri.go:89] found id: ""
	I0804 00:18:21.788364   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.788375   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:21.788381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:21.788428   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:21.829404   64758 cri.go:89] found id: ""
	I0804 00:18:21.829428   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.829436   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:21.829443   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:21.829502   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:21.869473   64758 cri.go:89] found id: ""
	I0804 00:18:21.869504   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.869515   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:21.869521   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:21.869587   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.111377   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.610253   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:17.827889   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.327830   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.100486   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:23.599788   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.601620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.909883   64758 cri.go:89] found id: ""
	I0804 00:18:21.909907   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.909915   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:21.909923   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:21.909940   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:21.925038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:21.925071   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:22.000261   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.000281   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:22.000294   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:22.082813   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:22.082846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:22.126741   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:22.126774   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:24.677246   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:24.692404   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:24.692467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:24.739001   64758 cri.go:89] found id: ""
	I0804 00:18:24.739039   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.739049   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:24.739054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:24.739119   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:24.779558   64758 cri.go:89] found id: ""
	I0804 00:18:24.779586   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.779597   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:24.779605   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:24.779664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:24.819257   64758 cri.go:89] found id: ""
	I0804 00:18:24.819284   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.819295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:24.819301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:24.819363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:24.862504   64758 cri.go:89] found id: ""
	I0804 00:18:24.862531   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.862539   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:24.862544   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:24.862599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:24.899605   64758 cri.go:89] found id: ""
	I0804 00:18:24.899637   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.899649   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:24.899656   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:24.899716   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:24.936575   64758 cri.go:89] found id: ""
	I0804 00:18:24.936604   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.936612   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:24.936618   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:24.936667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:24.971736   64758 cri.go:89] found id: ""
	I0804 00:18:24.971764   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.971775   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:24.971782   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:24.971851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:25.010214   64758 cri.go:89] found id: ""
	I0804 00:18:25.010244   64758 logs.go:276] 0 containers: []
	W0804 00:18:25.010253   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:25.010265   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:25.010279   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:25.091145   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:25.091186   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:25.137574   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:25.137603   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:25.189559   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:25.189593   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:25.204725   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:25.204763   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:25.278903   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.111666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:22.827542   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:24.829587   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.326922   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:28.100576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:30.603955   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.779500   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:27.793548   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:27.793628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:27.830811   64758 cri.go:89] found id: ""
	I0804 00:18:27.830844   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.830854   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:27.830862   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:27.830919   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:27.869966   64758 cri.go:89] found id: ""
	I0804 00:18:27.869991   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.869998   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:27.870004   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:27.870062   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:27.909474   64758 cri.go:89] found id: ""
	I0804 00:18:27.909496   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.909504   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:27.909509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:27.909567   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:27.948588   64758 cri.go:89] found id: ""
	I0804 00:18:27.948613   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.948625   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:27.948632   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:27.948704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:27.991957   64758 cri.go:89] found id: ""
	I0804 00:18:27.991979   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.991987   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:27.991993   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:27.992052   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:28.029516   64758 cri.go:89] found id: ""
	I0804 00:18:28.029544   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.029555   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:28.029562   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:28.029627   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:28.067851   64758 cri.go:89] found id: ""
	I0804 00:18:28.067879   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.067891   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:28.067898   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:28.067957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:28.107488   64758 cri.go:89] found id: ""
	I0804 00:18:28.107514   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.107524   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:28.107534   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:28.107548   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:28.158490   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:28.158523   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:28.172000   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:28.172030   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:28.247803   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:28.247823   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:28.247839   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:28.326695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:28.326727   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:30.867241   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:30.881074   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:30.881146   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:30.919078   64758 cri.go:89] found id: ""
	I0804 00:18:30.919105   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.919115   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:30.919122   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:30.919184   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:30.954436   64758 cri.go:89] found id: ""
	I0804 00:18:30.954463   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.954474   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:30.954481   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:30.954546   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:30.993080   64758 cri.go:89] found id: ""
	I0804 00:18:30.993110   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.993121   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:30.993129   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:30.993188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:31.031465   64758 cri.go:89] found id: ""
	I0804 00:18:31.031493   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.031504   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:31.031512   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:31.031570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:31.067374   64758 cri.go:89] found id: ""
	I0804 00:18:31.067405   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.067416   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:31.067423   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:31.067493   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:31.104021   64758 cri.go:89] found id: ""
	I0804 00:18:31.104048   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.104059   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:31.104066   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:31.104128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:31.146995   64758 cri.go:89] found id: ""
	I0804 00:18:31.147023   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.147033   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:31.147040   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:31.147106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:31.184708   64758 cri.go:89] found id: ""
	I0804 00:18:31.184739   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.184749   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:31.184760   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:31.184776   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:31.237743   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:31.237781   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:31.252038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:31.252070   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:31.326357   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:31.326380   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:31.326401   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:31.408212   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:31.408256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:27.610666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.610899   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:31.611472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.827980   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:32.326666   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.099814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:35.100740   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.954396   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:33.968311   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:33.968384   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:34.006574   64758 cri.go:89] found id: ""
	I0804 00:18:34.006605   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.006625   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:34.006635   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:34.006698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:34.042400   64758 cri.go:89] found id: ""
	I0804 00:18:34.042427   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.042435   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:34.042441   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:34.042492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:34.080769   64758 cri.go:89] found id: ""
	I0804 00:18:34.080793   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.080804   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:34.080810   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:34.080877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:34.118283   64758 cri.go:89] found id: ""
	I0804 00:18:34.118311   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.118320   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:34.118326   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:34.118377   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:34.153679   64758 cri.go:89] found id: ""
	I0804 00:18:34.153708   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.153719   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:34.153727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:34.153780   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:34.189618   64758 cri.go:89] found id: ""
	I0804 00:18:34.189674   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.189686   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:34.189696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:34.189770   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:34.224628   64758 cri.go:89] found id: ""
	I0804 00:18:34.224666   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.224677   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:34.224684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:34.224744   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:34.265343   64758 cri.go:89] found id: ""
	I0804 00:18:34.265389   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.265399   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:34.265409   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:34.265428   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:34.337992   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:34.338014   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:34.338025   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:34.420224   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:34.420263   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:34.462009   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:34.462042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:34.520087   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:34.520120   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:34.111351   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.112271   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:34.328807   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.827190   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.599447   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.099291   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.035398   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:37.048955   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:37.049024   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:37.087433   64758 cri.go:89] found id: ""
	I0804 00:18:37.087460   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.087470   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:37.087478   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:37.087542   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:37.128227   64758 cri.go:89] found id: ""
	I0804 00:18:37.128255   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.128267   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:37.128275   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:37.128328   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:37.165371   64758 cri.go:89] found id: ""
	I0804 00:18:37.165405   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.165415   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:37.165424   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:37.165486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:37.201168   64758 cri.go:89] found id: ""
	I0804 00:18:37.201198   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.201209   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:37.201217   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:37.201278   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:37.237378   64758 cri.go:89] found id: ""
	I0804 00:18:37.237406   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.237414   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:37.237419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:37.237465   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:37.273425   64758 cri.go:89] found id: ""
	I0804 00:18:37.273456   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.273467   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:37.273475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:37.273547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:37.313019   64758 cri.go:89] found id: ""
	I0804 00:18:37.313048   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.313056   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:37.313061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:37.313116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:37.354741   64758 cri.go:89] found id: ""
	I0804 00:18:37.354771   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.354779   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:37.354788   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:37.354800   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:37.408703   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:37.408740   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.423393   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:37.423419   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:37.497460   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:37.497487   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:37.497501   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:37.579811   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:37.579856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:40.122872   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:40.139106   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:40.139177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:40.178571   64758 cri.go:89] found id: ""
	I0804 00:18:40.178599   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.178610   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:40.178617   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:40.178679   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:40.215680   64758 cri.go:89] found id: ""
	I0804 00:18:40.215714   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.215722   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:40.215728   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:40.215776   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:40.250618   64758 cri.go:89] found id: ""
	I0804 00:18:40.250647   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.250658   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:40.250666   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:40.250729   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:40.289195   64758 cri.go:89] found id: ""
	I0804 00:18:40.289223   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.289233   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:40.289240   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:40.289296   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:40.330961   64758 cri.go:89] found id: ""
	I0804 00:18:40.330988   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.330998   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:40.331006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:40.331056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:40.376435   64758 cri.go:89] found id: ""
	I0804 00:18:40.376465   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.376478   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:40.376487   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:40.376558   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:40.416415   64758 cri.go:89] found id: ""
	I0804 00:18:40.416447   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.416459   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:40.416465   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:40.416535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:40.452958   64758 cri.go:89] found id: ""
	I0804 00:18:40.452996   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.453007   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:40.453018   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:40.453036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:40.503775   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:40.503808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:40.517825   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:40.517855   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:40.587818   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:40.587847   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:40.587861   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:40.674139   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:40.674183   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:38.611068   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.611923   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:39.326489   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:41.327327   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:42.100795   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:44.602441   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.217266   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:43.232190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:43.232262   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:43.270127   64758 cri.go:89] found id: ""
	I0804 00:18:43.270156   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.270163   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:43.270169   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:43.270219   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:43.309401   64758 cri.go:89] found id: ""
	I0804 00:18:43.309429   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.309439   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:43.309446   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:43.309503   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:43.347210   64758 cri.go:89] found id: ""
	I0804 00:18:43.347235   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.347242   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:43.347247   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:43.347300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:43.382548   64758 cri.go:89] found id: ""
	I0804 00:18:43.382578   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.382588   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:43.382595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:43.382658   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:43.422076   64758 cri.go:89] found id: ""
	I0804 00:18:43.422102   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.422113   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:43.422121   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:43.422168   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:43.458525   64758 cri.go:89] found id: ""
	I0804 00:18:43.458552   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.458560   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:43.458566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:43.458623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:43.498134   64758 cri.go:89] found id: ""
	I0804 00:18:43.498157   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.498165   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:43.498170   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:43.498217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:43.543289   64758 cri.go:89] found id: ""
	I0804 00:18:43.543312   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.543320   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:43.543328   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:43.543338   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:43.593489   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:43.593521   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:43.607838   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:43.607869   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:43.682791   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:43.682813   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:43.682826   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:43.761695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:43.761737   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.305385   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:46.320003   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:46.320063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:46.367941   64758 cri.go:89] found id: ""
	I0804 00:18:46.367969   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.367980   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:46.367986   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:46.368058   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:46.422540   64758 cri.go:89] found id: ""
	I0804 00:18:46.422563   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.422572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:46.422578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:46.422637   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:46.470192   64758 cri.go:89] found id: ""
	I0804 00:18:46.470238   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.470248   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:46.470257   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:46.470316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:46.512375   64758 cri.go:89] found id: ""
	I0804 00:18:46.512399   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.512408   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:46.512413   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:46.512471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:46.546547   64758 cri.go:89] found id: ""
	I0804 00:18:46.546580   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.546592   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:46.546600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:46.546665   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:46.583598   64758 cri.go:89] found id: ""
	I0804 00:18:46.583621   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.583630   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:46.583636   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:46.583692   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:46.621066   64758 cri.go:89] found id: ""
	I0804 00:18:46.621101   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.621116   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:46.621122   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:46.621177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:46.654115   64758 cri.go:89] found id: ""
	I0804 00:18:46.654149   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.654162   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:46.654174   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:46.654191   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:46.738542   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:46.738582   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.778894   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:46.778923   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:46.833225   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:46.833257   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:46.847222   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:46.847247   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:18:42.612522   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.327420   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.327936   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:47.328380   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:46.604576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.100232   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:18:46.922590   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.423639   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:49.437417   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:49.437490   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:49.474889   64758 cri.go:89] found id: ""
	I0804 00:18:49.474914   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.474923   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:49.474929   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:49.474986   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:49.512860   64758 cri.go:89] found id: ""
	I0804 00:18:49.512889   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.512900   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:49.512908   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:49.512965   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:49.550558   64758 cri.go:89] found id: ""
	I0804 00:18:49.550594   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.550603   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:49.550611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:49.550671   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:49.587779   64758 cri.go:89] found id: ""
	I0804 00:18:49.587810   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.587823   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:49.587831   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:49.587890   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:49.630307   64758 cri.go:89] found id: ""
	I0804 00:18:49.630333   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.630344   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:49.630352   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:49.630411   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:49.665013   64758 cri.go:89] found id: ""
	I0804 00:18:49.665046   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.665057   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:49.665064   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:49.665127   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:49.701375   64758 cri.go:89] found id: ""
	I0804 00:18:49.701401   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.701410   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:49.701415   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:49.701472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:49.737237   64758 cri.go:89] found id: ""
	I0804 00:18:49.737261   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.737269   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:49.737278   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:49.737291   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:49.790998   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:49.791033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:49.804933   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:49.804965   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:49.877997   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.878019   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:49.878035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:49.963836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:49.963872   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:47.611774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.612581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.616560   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.827900   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.829950   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.599613   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:53.600496   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:52.506621   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:52.521482   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:52.521553   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:52.555980   64758 cri.go:89] found id: ""
	I0804 00:18:52.556010   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.556021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:52.556029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:52.556094   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:52.593088   64758 cri.go:89] found id: ""
	I0804 00:18:52.593119   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.593130   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:52.593138   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:52.593197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:52.632058   64758 cri.go:89] found id: ""
	I0804 00:18:52.632088   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.632107   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:52.632115   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:52.632177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:52.668701   64758 cri.go:89] found id: ""
	I0804 00:18:52.668730   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.668739   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:52.668745   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:52.668814   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:52.705041   64758 cri.go:89] found id: ""
	I0804 00:18:52.705068   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.705075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:52.705089   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:52.705149   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:52.743304   64758 cri.go:89] found id: ""
	I0804 00:18:52.743327   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.743335   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:52.743340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:52.743397   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:52.781020   64758 cri.go:89] found id: ""
	I0804 00:18:52.781050   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.781060   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:52.781073   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:52.781134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:52.820979   64758 cri.go:89] found id: ""
	I0804 00:18:52.821004   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.821014   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:52.821024   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:52.821042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:52.876450   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:52.876488   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:52.890529   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:52.890566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:52.960682   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:52.960710   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:52.960725   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:53.044000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:53.044040   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:55.601594   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:55.615574   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:55.615645   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:55.655116   64758 cri.go:89] found id: ""
	I0804 00:18:55.655146   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.655157   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:55.655164   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:55.655217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:55.695809   64758 cri.go:89] found id: ""
	I0804 00:18:55.695837   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.695846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:55.695851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:55.695909   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:55.732784   64758 cri.go:89] found id: ""
	I0804 00:18:55.732811   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.732822   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:55.732828   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:55.732920   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:55.773316   64758 cri.go:89] found id: ""
	I0804 00:18:55.773338   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.773347   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:55.773368   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:55.773416   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:55.808886   64758 cri.go:89] found id: ""
	I0804 00:18:55.808913   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.808924   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:55.808931   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:55.808990   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:55.848471   64758 cri.go:89] found id: ""
	I0804 00:18:55.848499   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.848507   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:55.848513   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:55.848568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:55.884088   64758 cri.go:89] found id: ""
	I0804 00:18:55.884117   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.884128   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:55.884134   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:55.884194   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:55.918194   64758 cri.go:89] found id: ""
	I0804 00:18:55.918222   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.918233   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:55.918243   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:55.918264   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:55.932685   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:55.932717   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:56.003817   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:56.003840   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:56.003856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:56.087804   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:56.087846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:56.129959   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:56.129993   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:54.111584   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.610664   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:54.327283   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.328332   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.100620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.601669   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.604763   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.685077   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:58.698624   64758 kubeadm.go:597] duration metric: took 4m4.179874556s to restartPrimaryControlPlane
	W0804 00:18:58.698704   64758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:18:58.698731   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:18:58.611004   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.611252   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.828188   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:01.329218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.100214   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.101275   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.967117   64758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.268366381s)
	I0804 00:19:03.967202   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:19:03.982098   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:19:03.991994   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:19:04.002780   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:19:04.002802   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:19:04.002845   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:19:04.012216   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:19:04.012279   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:19:04.021463   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:19:04.030689   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:19:04.030743   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:19:04.040801   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.050496   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:19:04.050558   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.060782   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:19:04.071595   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:19:04.071673   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:19:04.082123   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:19:04.313172   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:19:02.611712   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.111575   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.827427   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:06.327317   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.599775   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:09.599814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.611608   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.110194   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:08.333681   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.829626   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:11.601081   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.099098   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:12.110388   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.111401   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:13.327035   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:15.327695   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:17.327749   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.100543   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.602723   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:20.603470   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.611336   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.111798   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:19.329120   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.826869   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:22.605600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.101500   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:23.610581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.610814   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:24.326982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:26.827772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:27.599557   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.600283   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:28.110748   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:30.111027   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.327031   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:31.328581   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.101571   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.601251   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.610784   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.612611   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:33.828237   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:35.828319   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.099717   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.100492   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.111009   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.610805   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:38.326730   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:40.327548   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.330066   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:41.600239   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:43.600686   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.601464   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.110900   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:44.610221   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.605124   65087 pod_ready.go:81] duration metric: took 4m0.000843677s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:45.605152   65087 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0804 00:19:45.605175   65087 pod_ready.go:38] duration metric: took 4m13.615224515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:45.605208   65087 kubeadm.go:597] duration metric: took 4m21.736484018s to restartPrimaryControlPlane
	W0804 00:19:45.605273   65087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:19:45.605304   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
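For reference, the pod_ready wait logged above amounts to polling the pod's Ready condition until a fixed 4m0s deadline expires. A minimal client-go sketch of that shape (the kubeconfig path, pod name, and poll interval are assumptions for illustration, not minikube's actual implementation or values):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 4 minutes mirrors the 4m0s timeout seen in the log above.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-5xfgz", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for pod to be Ready")
}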
	I0804 00:19:44.827547   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:47.329541   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:48.101237   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:50.603754   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:49.826561   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:51.828643   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.100714   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:55.102037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.832996   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:54.830906   65441 pod_ready.go:81] duration metric: took 4m0.010324747s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:54.830936   65441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:19:54.830947   65441 pod_ready.go:38] duration metric: took 4m4.842701336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:54.830968   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:19:54.831003   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:54.831070   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:54.887772   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:54.887804   65441 cri.go:89] found id: ""
	I0804 00:19:54.887815   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:54.887877   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.892740   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:54.892801   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:54.943044   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:54.943082   65441 cri.go:89] found id: ""
	I0804 00:19:54.943092   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:54.943164   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.947699   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:54.947765   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:54.997280   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:54.997302   65441 cri.go:89] found id: ""
	I0804 00:19:54.997311   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:54.997380   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.005574   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:55.005642   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:55.066824   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:55.066845   65441 cri.go:89] found id: ""
	I0804 00:19:55.066852   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:55.066906   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.071713   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:55.071779   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:55.116381   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.116406   65441 cri.go:89] found id: ""
	I0804 00:19:55.116414   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:55.116468   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.121174   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:55.121237   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:55.168300   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:55.168323   65441 cri.go:89] found id: ""
	I0804 00:19:55.168331   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:55.168381   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.173450   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:55.173509   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:55.218999   65441 cri.go:89] found id: ""
	I0804 00:19:55.219030   65441 logs.go:276] 0 containers: []
	W0804 00:19:55.219041   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:55.219048   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:55.219115   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:55.263696   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:55.263723   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.263727   65441 cri.go:89] found id: ""
	I0804 00:19:55.263734   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:55.263789   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.269001   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.277864   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:19:55.277899   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.323692   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:55.323729   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.364971   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:55.365005   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:55.871942   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:19:55.871983   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:19:55.929828   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:55.929869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:55.987389   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:55.987425   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:56.041330   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:56.041381   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:56.082524   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:56.082556   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:56.122545   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:19:56.122572   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:56.178249   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:19:56.178288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:56.219273   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:19:56.219300   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:19:56.235345   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:19:56.235389   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:19:56.370660   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:56.370692   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:57.600248   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:00.100920   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:58.936934   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:19:58.953624   65441 api_server.go:72] duration metric: took 4m14.22488371s to wait for apiserver process to appear ...
	I0804 00:19:58.953655   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:19:58.953700   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:58.953764   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:58.997408   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:58.997434   65441 cri.go:89] found id: ""
	I0804 00:19:58.997443   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:58.997492   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.004398   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:59.004466   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:59.041483   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.041510   65441 cri.go:89] found id: ""
	I0804 00:19:59.041518   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:59.041568   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.045754   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:59.045825   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:59.081738   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.081756   65441 cri.go:89] found id: ""
	I0804 00:19:59.081764   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:59.081809   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.086297   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:59.086348   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:59.124421   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:59.124440   65441 cri.go:89] found id: ""
	I0804 00:19:59.124447   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:59.124494   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.128612   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:59.128677   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:59.165702   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:59.165728   65441 cri.go:89] found id: ""
	I0804 00:19:59.165737   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:59.165791   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.170016   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:59.170103   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:59.205275   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:59.205299   65441 cri.go:89] found id: ""
	I0804 00:19:59.205307   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:59.205377   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.209637   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:59.209699   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:59.244254   65441 cri.go:89] found id: ""
	I0804 00:19:59.244281   65441 logs.go:276] 0 containers: []
	W0804 00:19:59.244290   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:59.244295   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:59.244343   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:59.281850   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:59.281876   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.281880   65441 cri.go:89] found id: ""
	I0804 00:19:59.281887   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:59.281935   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.286423   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.291108   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:59.291134   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.340778   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:59.340808   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.379258   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:59.379288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.418902   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:59.418932   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:59.875668   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:59.875708   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:59.932947   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:59.932980   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:59.980190   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:59.980224   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:00.024331   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:00.024359   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:00.064676   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:00.064701   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:00.117684   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:00.117717   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:00.153654   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:00.153683   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:00.200840   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:00.200869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:00.214380   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:00.214410   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:02.101240   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:04.600064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:02.832546   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:20:02.837684   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:20:02.838736   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:02.838763   65441 api_server.go:131] duration metric: took 3.885096913s to wait for apiserver health ...
	I0804 00:20:02.838773   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:02.838798   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:02.838856   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:02.878530   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:02.878556   65441 cri.go:89] found id: ""
	I0804 00:20:02.878565   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:20:02.878628   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.883263   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:02.883338   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:02.921989   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:02.922009   65441 cri.go:89] found id: ""
	I0804 00:20:02.922017   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:20:02.922062   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.928690   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:02.928767   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:02.967469   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:02.967490   65441 cri.go:89] found id: ""
	I0804 00:20:02.967498   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:20:02.967544   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.972155   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:02.972217   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:03.011875   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:03.011900   65441 cri.go:89] found id: ""
	I0804 00:20:03.011910   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:20:03.011966   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.016326   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:03.016395   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:03.057114   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:03.057137   65441 cri.go:89] found id: ""
	I0804 00:20:03.057145   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:20:03.057206   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.061528   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:03.061592   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:03.101778   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:03.101832   65441 cri.go:89] found id: ""
	I0804 00:20:03.101842   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:20:03.101902   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.106292   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:03.106368   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:03.146453   65441 cri.go:89] found id: ""
	I0804 00:20:03.146484   65441 logs.go:276] 0 containers: []
	W0804 00:20:03.146496   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:03.146504   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:03.146569   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:03.185861   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.185884   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.185887   65441 cri.go:89] found id: ""
	I0804 00:20:03.185894   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:20:03.185941   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.190490   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.194727   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:03.194750   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:03.308015   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:20:03.308052   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:03.358699   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:20:03.358732   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:03.410398   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:20:03.410430   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.450651   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:03.450685   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:03.859092   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:03.859145   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.905500   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:03.905529   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:03.951014   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:03.951047   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:04.003275   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:04.003311   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:04.017574   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:20:04.017608   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:04.054252   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:20:04.054283   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:04.094524   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:04.094558   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:04.131163   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:04.131192   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:06.691154   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:06.691193   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.691199   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.691203   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.691209   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.691213   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.691218   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.691226   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.691232   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.691244   65441 system_pods.go:74] duration metric: took 3.852463199s to wait for pod list to return data ...
	I0804 00:20:06.691257   65441 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:06.693724   65441 default_sa.go:45] found service account: "default"
	I0804 00:20:06.693755   65441 default_sa.go:55] duration metric: took 2.486182ms for default service account to be created ...
	I0804 00:20:06.693767   65441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:06.698925   65441 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:06.698950   65441 system_pods.go:89] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.698956   65441 system_pods.go:89] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.698962   65441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.698968   65441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.698972   65441 system_pods.go:89] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.698976   65441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.698983   65441 system_pods.go:89] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.698990   65441 system_pods.go:89] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.698997   65441 system_pods.go:126] duration metric: took 5.224971ms to wait for k8s-apps to be running ...
	I0804 00:20:06.699003   65441 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:06.699047   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:06.714188   65441 system_svc.go:56] duration metric: took 15.17801ms WaitForService to wait for kubelet
	I0804 00:20:06.714213   65441 kubeadm.go:582] duration metric: took 4m21.985480612s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:06.714232   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:06.716717   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:06.716743   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:06.716757   65441 node_conditions.go:105] duration metric: took 2.521245ms to run NodePressure ...
	I0804 00:20:06.716771   65441 start.go:241] waiting for startup goroutines ...
	I0804 00:20:06.716780   65441 start.go:246] waiting for cluster config update ...
	I0804 00:20:06.716796   65441 start.go:255] writing updated cluster config ...
	I0804 00:20:06.717156   65441 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:06.765983   65441 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:06.768482   65441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-969068" cluster and "default" namespace by default
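For reference, the healthz wait above (api_server.go:253/279) is a plain HTTPS GET against the apiserver's /healthz endpoint until it returns 200 with body "ok". A minimal sketch using the address from the log; skipping TLS verification here is a simplification for the example, a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.132:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}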
	I0804 00:20:06.600233   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:08.603829   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:11.852948   65087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.247618249s)
	I0804 00:20:11.853025   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:11.870882   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:20:11.882005   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:20:11.892505   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:20:11.892527   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:20:11.892570   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:20:11.902005   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:20:11.902061   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:20:11.911585   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:20:11.921837   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:20:11.921911   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:20:11.101091   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:13.607073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:14.600605   64502 pod_ready.go:81] duration metric: took 4m0.007136508s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:20:14.600629   64502 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:20:14.600637   64502 pod_ready.go:38] duration metric: took 4m5.120472791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:14.600651   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:14.600675   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:14.600717   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:14.669699   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:14.669724   64502 cri.go:89] found id: ""
	I0804 00:20:14.669733   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:14.669789   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.674907   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:14.674978   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:14.720830   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:14.720867   64502 cri.go:89] found id: ""
	I0804 00:20:14.720877   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:14.720937   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.726667   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:14.726729   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:14.778216   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:14.778247   64502 cri.go:89] found id: ""
	I0804 00:20:14.778256   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:14.778321   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.785349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:14.785433   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:14.836381   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:14.836408   64502 cri.go:89] found id: ""
	I0804 00:20:14.836416   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:14.836475   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.841662   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:14.841752   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:14.884803   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:14.884827   64502 cri.go:89] found id: ""
	I0804 00:20:14.884836   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:14.884904   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.890625   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:14.890696   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:14.942713   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:14.942732   64502 cri.go:89] found id: ""
	I0804 00:20:14.942739   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:14.942800   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.948335   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:14.948391   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:14.994869   64502 cri.go:89] found id: ""
	I0804 00:20:14.994900   64502 logs.go:276] 0 containers: []
	W0804 00:20:14.994910   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:14.994917   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:14.994985   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:15.034528   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.034557   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.034564   64502 cri.go:89] found id: ""
	I0804 00:20:15.034574   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:15.034633   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.039335   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.044600   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:15.044625   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.091365   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:15.091398   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.144896   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:15.144924   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:15.675849   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:15.675901   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:15.691640   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:15.691699   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:11.931844   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.941369   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:20:11.941430   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.951279   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:20:11.961201   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:20:11.961275   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:20:11.972150   65087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:20:12.024567   65087 kubeadm.go:310] W0804 00:20:12.001791    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.025287   65087 kubeadm.go:310] W0804 00:20:12.002530    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.154034   65087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:20:20.358593   65087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0804 00:20:20.358649   65087 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:20:20.358721   65087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:20:20.358834   65087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:20:20.358953   65087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 00:20:20.359013   65087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:20:20.360498   65087 out.go:204]   - Generating certificates and keys ...
	I0804 00:20:20.360590   65087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:20:20.360692   65087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:20:20.360767   65087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:20:20.360821   65087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:20:20.360915   65087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:20:20.360971   65087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:20:20.361042   65087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:20:20.361124   65087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:20:20.361228   65087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:20:20.361307   65087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:20:20.361342   65087 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:20:20.361436   65087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:20:20.361523   65087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:20:20.361592   65087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:20:20.361642   65087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:20:20.361698   65087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:20:20.361746   65087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:20:20.361815   65087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:20:20.361881   65087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:20:20.363214   65087 out.go:204]   - Booting up control plane ...
	I0804 00:20:20.363312   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:20:20.363381   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:20:20.363450   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:20:20.363541   65087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:20:20.363628   65087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:20:20.363678   65087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:20:20.363790   65087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:20:20.363889   65087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 00:20:20.363960   65087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.009132208s
	I0804 00:20:20.364044   65087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:20:20.364094   65087 kubeadm.go:310] [api-check] The API server is healthy after 4.501833932s
	I0804 00:20:20.364201   65087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:20:20.364321   65087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:20:20.364397   65087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:20:20.364585   65087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-118016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:20:20.364634   65087 kubeadm.go:310] [bootstrap-token] Using token: bbnfwa.jorg7huedw5cbtk2
	I0804 00:20:20.366569   65087 out.go:204]   - Configuring RBAC rules ...
	I0804 00:20:20.366705   65087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:20:20.366823   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:20:20.366979   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:20:20.367160   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:20:20.367275   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:20:20.367352   65087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:20:20.367447   65087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:20:20.367510   65087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:20:20.367574   65087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:20:20.367580   65087 kubeadm.go:310] 
	I0804 00:20:20.367629   65087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:20:20.367635   65087 kubeadm.go:310] 
	I0804 00:20:20.367697   65087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:20:20.367703   65087 kubeadm.go:310] 
	I0804 00:20:20.367724   65087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:20:20.367784   65087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:20:20.367828   65087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:20:20.367834   65087 kubeadm.go:310] 
	I0804 00:20:20.367886   65087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:20:20.367903   65087 kubeadm.go:310] 
	I0804 00:20:20.367971   65087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:20:20.367981   65087 kubeadm.go:310] 
	I0804 00:20:20.368050   65087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:20:20.368125   65087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:20:20.368187   65087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:20:20.368193   65087 kubeadm.go:310] 
	I0804 00:20:20.368262   65087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:20:20.368353   65087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:20:20.368367   65087 kubeadm.go:310] 
	I0804 00:20:20.368480   65087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368588   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:20:20.368614   65087 kubeadm.go:310] 	--control-plane 
	I0804 00:20:20.368621   65087 kubeadm.go:310] 
	I0804 00:20:20.368705   65087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:20:20.368712   65087 kubeadm.go:310] 
	I0804 00:20:20.368810   65087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368933   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:20:20.368947   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:20:20.368957   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:20:20.370303   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
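For reference, the "Configuring bridge CNI" step above places a bridge conflist into the node's CNI configuration directory. A minimal sketch of the general shape of such a config, generated from Go; field values, the pod CIDR, and the target path are assumptions, not necessarily what minikube writes:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed values for illustration only.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []interface{}{
			map[string]interface{}{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
			map[string]interface{}{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// In practice this JSON would be written under /etc/cni/net.d (path assumed).
	fmt.Println(string(out))
}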
	I0804 00:20:15.859131   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:15.859169   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:15.917686   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:15.917726   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:15.964285   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:15.964328   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:16.019646   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:16.019679   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:16.069379   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:16.069416   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:16.129752   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:16.129842   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:16.177015   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:16.177043   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:16.220526   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:16.220560   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:18.771509   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:18.793252   64502 api_server.go:72] duration metric: took 4m15.042389156s to wait for apiserver process to appear ...
	I0804 00:20:18.793291   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:18.793334   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:18.793415   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:18.839339   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:18.839363   64502 cri.go:89] found id: ""
	I0804 00:20:18.839372   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:18.839432   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.843932   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:18.844005   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:18.894398   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:18.894422   64502 cri.go:89] found id: ""
	I0804 00:20:18.894432   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:18.894491   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.899596   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:18.899664   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:18.947077   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:18.947106   64502 cri.go:89] found id: ""
	I0804 00:20:18.947114   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:18.947168   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.952349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:18.952431   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:18.999336   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:18.999361   64502 cri.go:89] found id: ""
	I0804 00:20:18.999377   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:18.999441   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.005419   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:19.005502   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:19.061388   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.061413   64502 cri.go:89] found id: ""
	I0804 00:20:19.061422   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:19.061476   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.066071   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:19.066139   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:19.111849   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.111872   64502 cri.go:89] found id: ""
	I0804 00:20:19.111879   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:19.111929   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.116272   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:19.116323   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:19.157387   64502 cri.go:89] found id: ""
	I0804 00:20:19.157414   64502 logs.go:276] 0 containers: []
	W0804 00:20:19.157423   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:19.157431   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:19.157493   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:19.199627   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.199654   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.199660   64502 cri.go:89] found id: ""
	I0804 00:20:19.199669   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:19.199727   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.204317   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.208707   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:19.208729   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:19.261684   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:19.261717   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:19.309861   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:19.309890   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:19.349376   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:19.349403   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.388561   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:19.388590   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.466119   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:19.466163   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.515539   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:19.515575   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.561529   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:19.561556   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:19.626188   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:19.626219   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:19.640348   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:19.640372   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:19.772397   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:19.772439   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:19.827217   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:19.827260   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:20.306543   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:20.306589   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:20.371388   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:20:20.384738   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:20:20.404547   65087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:20:20.404607   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:20.404659   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-118016 minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=no-preload-118016 minikube.k8s.io/primary=true
	I0804 00:20:20.602476   65087 ops.go:34] apiserver oom_adj: -16
	I0804 00:20:20.602551   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.103011   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.602888   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.102779   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.603282   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.103337   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.603522   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.103510   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.603474   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.689895   65087 kubeadm.go:1113] duration metric: took 4.285337247s to wait for elevateKubeSystemPrivileges
	I0804 00:20:24.689931   65087 kubeadm.go:394] duration metric: took 5m0.881315877s to StartCluster
	I0804 00:20:24.689947   65087 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.690018   65087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:20:24.691559   65087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.691784   65087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:20:24.691848   65087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:20:24.691963   65087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-118016"
	I0804 00:20:24.691977   65087 addons.go:69] Setting default-storageclass=true in profile "no-preload-118016"
	I0804 00:20:24.691999   65087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-118016"
	I0804 00:20:24.692001   65087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118016"
	I0804 00:20:24.692001   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:20:24.692018   65087 addons.go:69] Setting metrics-server=true in profile "no-preload-118016"
	W0804 00:20:24.692007   65087 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:20:24.692068   65087 addons.go:234] Setting addon metrics-server=true in "no-preload-118016"
	I0804 00:20:24.692086   65087 host.go:66] Checking if "no-preload-118016" exists ...
	W0804 00:20:24.692099   65087 addons.go:243] addon metrics-server should already be in state true
	I0804 00:20:24.692142   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.692440   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692464   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692494   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692441   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692517   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692566   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.693590   65087 out.go:177] * Verifying Kubernetes components...
	I0804 00:20:24.695139   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:20:24.708841   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0804 00:20:24.709324   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.709911   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.709937   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.710305   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.710594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.712827   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0804 00:20:24.712894   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0804 00:20:24.713349   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713884   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713899   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.713923   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713942   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.714211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714264   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714421   65087 addons.go:234] Setting addon default-storageclass=true in "no-preload-118016"
	W0804 00:20:24.714440   65087 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:20:24.714468   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.714605   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714623   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714801   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714846   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714993   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.715014   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.730476   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0804 00:20:24.730811   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0804 00:20:24.730912   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731145   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731470   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731486   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731733   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731748   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731808   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732034   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.732123   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732294   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.733677   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0804 00:20:24.734185   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.734257   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734306   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734689   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.734709   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.735090   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.735618   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.735643   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.736977   65087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:20:24.736979   65087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:20:22.853589   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:20:22.859439   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:20:22.860503   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:22.860521   64502 api_server.go:131] duration metric: took 4.067223392s to wait for apiserver health ...
	I0804 00:20:22.860528   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:22.860550   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:22.860598   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:22.901174   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:22.901193   64502 cri.go:89] found id: ""
	I0804 00:20:22.901200   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:22.901246   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.905319   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:22.905404   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:22.948354   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:22.948378   64502 cri.go:89] found id: ""
	I0804 00:20:22.948387   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:22.948438   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.952776   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:22.952863   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:22.989339   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:22.989376   64502 cri.go:89] found id: ""
	I0804 00:20:22.989385   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:22.989443   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.993833   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:22.993909   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:23.035367   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.035385   64502 cri.go:89] found id: ""
	I0804 00:20:23.035392   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:23.035434   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.040184   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:23.040259   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:23.078508   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.078529   64502 cri.go:89] found id: ""
	I0804 00:20:23.078538   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:23.078601   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.082907   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:23.082969   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:23.120846   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.120870   64502 cri.go:89] found id: ""
	I0804 00:20:23.120880   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:23.120943   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.125641   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:23.125702   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:23.172188   64502 cri.go:89] found id: ""
	I0804 00:20:23.172212   64502 logs.go:276] 0 containers: []
	W0804 00:20:23.172224   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:23.172232   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:23.172297   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:23.218188   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.218207   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.218211   64502 cri.go:89] found id: ""
	I0804 00:20:23.218217   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:23.218268   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.222562   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.226965   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:23.226989   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:23.269384   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:23.269414   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.309148   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:23.309178   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.356908   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:23.356936   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.395352   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:23.395381   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:23.450901   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:23.450925   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.488908   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:23.488945   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.551780   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:23.551808   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:23.975030   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:23.975070   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:24.035464   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:24.035497   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:24.053118   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:24.053148   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:24.197157   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:24.197189   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:24.254049   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:24.254083   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:24.738735   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:20:24.738757   65087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:20:24.738785   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.738836   65087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:24.738847   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:20:24.738860   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.742131   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742539   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.742569   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742690   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.742968   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743009   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.743254   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.743142   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743174   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.743387   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.743447   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743590   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743720   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.754983   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0804 00:20:24.755436   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.755877   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.755901   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.756229   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.756454   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.758285   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.758520   65087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:24.758537   65087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:20:24.758555   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.761268   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.761715   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.761739   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.762001   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.762211   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.762402   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.762593   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.942323   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:20:24.971293   65087 node_ready.go:35] waiting up to 6m0s for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991406   65087 node_ready.go:49] node "no-preload-118016" has status "Ready":"True"
	I0804 00:20:24.991428   65087 node_ready.go:38] duration metric: took 20.101499ms for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991436   65087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:25.004484   65087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:25.069407   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:20:25.069437   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:20:25.093645   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:25.178590   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:20:25.178615   65087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:20:25.246634   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:25.272880   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.272916   65087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:20:25.368517   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.442382   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442406   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.442668   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.442711   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.442717   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.442726   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442732   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.444425   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.444456   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.444463   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.451275   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.451298   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.451605   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.451620   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.451617   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218075   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218105   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218391   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218416   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.218427   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218435   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218440   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218702   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218764   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218786   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.671629   65087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.303057537s)
	I0804 00:20:26.671683   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.671702   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672010   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672031   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672041   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.672049   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672327   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672363   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672378   65087 addons.go:475] Verifying addon metrics-server=true in "no-preload-118016"
	I0804 00:20:26.674374   65087 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:20:26.803868   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:26.803909   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.803917   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.803923   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.803928   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.803934   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.803940   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.803948   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.803957   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.803966   64502 system_pods.go:74] duration metric: took 3.943432992s to wait for pod list to return data ...
	I0804 00:20:26.803978   64502 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:26.808760   64502 default_sa.go:45] found service account: "default"
	I0804 00:20:26.808786   64502 default_sa.go:55] duration metric: took 4.797226ms for default service account to be created ...
	I0804 00:20:26.808796   64502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:26.814721   64502 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:26.814753   64502 system_pods.go:89] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.814761   64502 system_pods.go:89] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.814768   64502 system_pods.go:89] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.814774   64502 system_pods.go:89] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.814780   64502 system_pods.go:89] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.814787   64502 system_pods.go:89] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.814798   64502 system_pods.go:89] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.814807   64502 system_pods.go:89] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.814819   64502 system_pods.go:126] duration metric: took 6.01558ms to wait for k8s-apps to be running ...
	I0804 00:20:26.814828   64502 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:26.814894   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:26.837462   64502 system_svc.go:56] duration metric: took 22.624089ms WaitForService to wait for kubelet
	I0804 00:20:26.837494   64502 kubeadm.go:582] duration metric: took 4m23.086636256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:26.837522   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:26.841517   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:26.841548   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:26.841563   64502 node_conditions.go:105] duration metric: took 4.034693ms to run NodePressure ...
	I0804 00:20:26.841576   64502 start.go:241] waiting for startup goroutines ...
	I0804 00:20:26.841586   64502 start.go:246] waiting for cluster config update ...
	I0804 00:20:26.841600   64502 start.go:255] writing updated cluster config ...
	I0804 00:20:26.841939   64502 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:26.908142   64502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:26.910191   64502 out.go:177] * Done! kubectl is now configured to use "embed-certs-877598" cluster and "default" namespace by default
	I0804 00:20:26.675679   65087 addons.go:510] duration metric: took 1.98382947s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:20:27.012226   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:29.511886   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:32.011000   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:32.011021   65087 pod_ready.go:81] duration metric: took 7.006508451s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:32.011031   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518235   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.518260   65087 pod_ready.go:81] duration metric: took 1.507219524s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518270   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522894   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.522916   65087 pod_ready.go:81] duration metric: took 4.639763ms for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522928   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527271   65087 pod_ready.go:92] pod "kube-proxy-4jqng" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.527291   65087 pod_ready.go:81] duration metric: took 4.353851ms for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527303   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531405   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.531424   65087 pod_ready.go:81] duration metric: took 4.113418ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531433   65087 pod_ready.go:38] duration metric: took 8.539987559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:33.531449   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:33.531505   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:33.546783   65087 api_server.go:72] duration metric: took 8.854972636s to wait for apiserver process to appear ...
	I0804 00:20:33.546813   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:33.546832   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:20:33.551131   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:20:33.552092   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:20:33.552112   65087 api_server.go:131] duration metric: took 5.292367ms to wait for apiserver health ...
	I0804 00:20:33.552119   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:33.557965   65087 system_pods.go:59] 9 kube-system pods found
	I0804 00:20:33.557987   65087 system_pods.go:61] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.557995   65087 system_pods.go:61] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.558000   65087 system_pods.go:61] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.558005   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.558009   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.558014   65087 system_pods.go:61] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.558018   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.558026   65087 system_pods.go:61] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.558035   65087 system_pods.go:61] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.558045   65087 system_pods.go:74] duration metric: took 5.921154ms to wait for pod list to return data ...
	I0804 00:20:33.558057   65087 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:33.608139   65087 default_sa.go:45] found service account: "default"
	I0804 00:20:33.608164   65087 default_sa.go:55] duration metric: took 50.097877ms for default service account to be created ...
	I0804 00:20:33.608174   65087 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:33.811878   65087 system_pods.go:86] 9 kube-system pods found
	I0804 00:20:33.811906   65087 system_pods.go:89] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.811912   65087 system_pods.go:89] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.811916   65087 system_pods.go:89] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.811920   65087 system_pods.go:89] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.811925   65087 system_pods.go:89] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.811928   65087 system_pods.go:89] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.811932   65087 system_pods.go:89] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.811939   65087 system_pods.go:89] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.811943   65087 system_pods.go:89] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.811950   65087 system_pods.go:126] duration metric: took 203.770479ms to wait for k8s-apps to be running ...
	I0804 00:20:33.811957   65087 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:33.812000   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:33.827146   65087 system_svc.go:56] duration metric: took 15.17867ms WaitForService to wait for kubelet
	I0804 00:20:33.827176   65087 kubeadm.go:582] duration metric: took 9.135367695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:33.827199   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:34.009032   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:34.009056   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:34.009076   65087 node_conditions.go:105] duration metric: took 181.872031ms to run NodePressure ...
	I0804 00:20:34.009086   65087 start.go:241] waiting for startup goroutines ...
	I0804 00:20:34.009112   65087 start.go:246] waiting for cluster config update ...
	I0804 00:20:34.009128   65087 start.go:255] writing updated cluster config ...
	I0804 00:20:34.009423   65087 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:34.059796   65087 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 00:20:34.061903   65087 out.go:177] * Done! kubectl is now configured to use "no-preload-118016" cluster and "default" namespace by default
	I0804 00:21:00.664979   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:21:00.665100   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:21:00.666810   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:00.666904   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:00.667020   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:00.667150   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:00.667278   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:00.667370   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:00.670254   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:00.670337   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:00.670431   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:00.670537   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:00.670623   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:00.670721   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:00.670788   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:00.670883   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:00.670990   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:00.671079   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:00.671168   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:00.671217   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:00.671265   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:00.671359   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:00.671442   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:00.671529   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:00.671611   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:00.671756   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:00.671856   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:00.671888   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:00.671940   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:00.673410   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:00.673506   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:00.673573   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:00.673627   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:00.673692   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:00.673828   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:00.673876   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:00.673972   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674207   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674283   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674517   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674590   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674752   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674851   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675053   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675173   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675451   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675463   64758 kubeadm.go:310] 
	I0804 00:21:00.675511   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:21:00.675561   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:21:00.675571   64758 kubeadm.go:310] 
	I0804 00:21:00.675614   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:21:00.675656   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:21:00.675787   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:21:00.675797   64758 kubeadm.go:310] 
	I0804 00:21:00.675928   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:21:00.675970   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:21:00.676009   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:21:00.676026   64758 kubeadm.go:310] 
	I0804 00:21:00.676172   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:21:00.676278   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:21:00.676289   64758 kubeadm.go:310] 
	I0804 00:21:00.676393   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:21:00.676466   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:21:00.676532   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:21:00.676609   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:21:00.676632   64758 kubeadm.go:310] 
	W0804 00:21:00.676723   64758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 00:21:00.676765   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:21:01.138781   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:21:01.154405   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:21:01.164426   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:21:01.164445   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:21:01.164496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:21:01.173853   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:21:01.173907   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:21:01.183634   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:21:01.193283   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:21:01.193342   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:21:01.202427   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.212186   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:21:01.212235   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.222754   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:21:01.232996   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:21:01.233059   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:21:01.243778   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:21:01.319895   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:01.319975   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:01.474907   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:01.475029   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:01.475119   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:01.683624   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:01.685482   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:01.685584   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:01.685691   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:01.685792   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:01.685880   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:01.685991   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:01.686067   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:01.686147   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:01.686285   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:01.686399   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:01.686513   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:01.686600   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:01.686670   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:01.985613   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:02.088377   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:02.336621   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:02.448654   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:02.470140   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:02.471390   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:02.471456   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:02.610840   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:02.612641   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:02.612744   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:02.627044   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:02.629019   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:02.630430   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:02.633022   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:42.635581   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:42.635740   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:42.636036   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:47.636656   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:47.636879   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:57.637900   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:57.638098   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:17.638425   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:17.638634   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637807   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:57.637988   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637996   64758 kubeadm.go:310] 
	I0804 00:22:57.638035   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:22:57.638079   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:22:57.638085   64758 kubeadm.go:310] 
	I0804 00:22:57.638118   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:22:57.638148   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:22:57.638288   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:22:57.638309   64758 kubeadm.go:310] 
	I0804 00:22:57.638426   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:22:57.638507   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:22:57.638619   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:22:57.638640   64758 kubeadm.go:310] 
	I0804 00:22:57.638829   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:22:57.638944   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:22:57.638959   64758 kubeadm.go:310] 
	I0804 00:22:57.639107   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:22:57.639191   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:22:57.639300   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:22:57.639399   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:22:57.639412   64758 kubeadm.go:310] 
	I0804 00:22:57.639782   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:22:57.639904   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:22:57.640012   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:22:57.640091   64758 kubeadm.go:394] duration metric: took 8m3.172057529s to StartCluster
	I0804 00:22:57.640138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:22:57.640202   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:22:57.684020   64758 cri.go:89] found id: ""
	I0804 00:22:57.684054   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.684064   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:22:57.684072   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:22:57.684134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:22:57.722756   64758 cri.go:89] found id: ""
	I0804 00:22:57.722780   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.722788   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:22:57.722793   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:22:57.722851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:22:57.760371   64758 cri.go:89] found id: ""
	I0804 00:22:57.760400   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.760412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:22:57.760419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:22:57.760476   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:22:57.796114   64758 cri.go:89] found id: ""
	I0804 00:22:57.796144   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.796155   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:22:57.796162   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:22:57.796211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:22:57.842148   64758 cri.go:89] found id: ""
	I0804 00:22:57.842179   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.842191   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:22:57.842198   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:22:57.842286   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:22:57.914193   64758 cri.go:89] found id: ""
	I0804 00:22:57.914218   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.914229   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:22:57.914236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:22:57.914290   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:22:57.965944   64758 cri.go:89] found id: ""
	I0804 00:22:57.965973   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.965984   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:22:57.965991   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:22:57.966063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:22:58.003016   64758 cri.go:89] found id: ""
	I0804 00:22:58.003044   64758 logs.go:276] 0 containers: []
	W0804 00:22:58.003055   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:22:58.003066   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:22:58.003093   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:22:58.017277   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:22:58.017304   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:22:58.094192   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:22:58.094214   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:22:58.094227   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:22:58.210901   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:22:58.210944   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:22:58.249283   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:22:58.249317   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:22:58.300998   64758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:22:58.301054   64758 out.go:239] * 
	W0804 00:22:58.301115   64758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.301137   64758 out.go:239] * 
	W0804 00:22:58.301978   64758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:22:58.305305   64758 out.go:177] 
	W0804 00:22:58.306722   64758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.306816   64758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:22:58.306848   64758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:22:58.308372   64758 out.go:177] 
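	The wait-control-plane failure above already names the follow-up steps; gathered here as a sketch, using only commands that appear in the output itself (the <profile> placeholder is illustrative and is not taken from this run):

		# inside the affected node, e.g. via 'minikube ssh -p <profile>':
		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

		# from the host: retry with the suggested kubelet cgroup-driver override, then capture logs for an issue report
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
		minikube logs -p <profile> --file=logs.txt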
	
	
	==> CRI-O <==
	Aug 04 00:29:08 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:08.979721726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cc13d26-48be-4ee7-a186-60a6b7970ca9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:08 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:08.979947802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cc13d26-48be-4ee7-a186-60a6b7970ca9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:08 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:08.982734007Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=517c6dd6-07fa-4404-8955-7a47b756773d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:08 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:08.982909392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=517c6dd6-07fa-4404-8955-7a47b756773d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:08 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:08.983379513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cc13d26-48be-4ee7-a186-60a6b7970ca9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:08 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:08.983712515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-e
f1c-402d-807b-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c77
9b084ab671cb1b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f921
07bd1638a2fad4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=517c6dd6-07fa-4404-8955-7a47b756773d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.002244157Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=39c1e19a-b1e9-4a54-b362-f02563d61b3e name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.002549533Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&PodSandboxMetadata{Name:busybox,Uid:0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730547793701634,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:15:39.905152573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-b8v28,Uid:e1c179bf-e99a-4b59-b731-dac458e6d6aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172273
0547787387323,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:15:39.905133328Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0ceb1b1a5cad5e4a80383ff0b7a4eb132d3274059ec472d91913d3bff54a5ed,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-646qm,Uid:c28af6f2-95c1-44ae-833a-d426ca62a169,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730545984828055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-646qm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c28af6f2-95c1-44ae-833a-d426ca62a169,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04
T00:15:39.905140714Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&PodSandboxMetadata{Name:kube-proxy-zz7fr,Uid:9e46c77a-ef1c-402d-807b-8d12b2e17b07,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730540225904208,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b-8d12b2e17b07,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:15:39.905146797Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730540225511728,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-08-04T00:15:39.905150810Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-969068,Uid:deef0c779b084ab671cb1b778374b594,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730535432355740,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1b778374b594,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: deef0c779b084ab671cb1b778374b594,kubernetes.io/config.seen: 2024-08-04T00:15:34.898286529Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-96
9068,Uid:ed0fb553a24a63a0aec0b3352959a32c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730535405437463,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed0fb553a24a63a0aec0b3352959a32c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ed0fb553a24a63a0aec0b3352959a32c,kubernetes.io/config.seen: 2024-08-04T00:15:34.898281943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-969068,Uid:e60c97373b9bec338962f9277ca078b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730535401365192,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-def
ault-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e60c97373b9bec338962f9277ca078b4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.132:8444,kubernetes.io/config.hash: e60c97373b9bec338962f9277ca078b4,kubernetes.io/config.seen: 2024-08-04T00:15:34.898287694Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-969068,Uid:05ab56790b945f92107bd1638a2fad4b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722730535396992362,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad4b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.39.132:2379,kubernetes.io/config.hash: 05ab56790b945f92107bd1638a2fad4b,kubernetes.io/config.seen: 2024-08-04T00:15:34.938989694Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=39c1e19a-b1e9-4a54-b362-f02563d61b3e name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.003290515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3d8964a-d2d0-4223-9dc5-75fa2d871605 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.003356150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3d8964a-d2d0-4223-9dc5-75fa2d871605 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.003547491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3d8964a-d2d0-4223-9dc5-75fa2d871605 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.032462691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd21e013-7c8d-46f2-8518-94b88ac9d2ab name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.032560857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd21e013-7c8d-46f2-8518-94b88ac9d2ab name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.033934735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=def0705c-860a-4e5b-afac-417db1539495 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.034321732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731349034299698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=def0705c-860a-4e5b-afac-417db1539495 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.034955950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b714c482-77ae-4941-8faf-fe99714201e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.035036100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b714c482-77ae-4941-8faf-fe99714201e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.035235022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b714c482-77ae-4941-8faf-fe99714201e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.074898329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72e748af-2164-4372-879c-f1d36e05c786 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.075006469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72e748af-2164-4372-879c-f1d36e05c786 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.076409642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e95ef77-e80f-4acf-adf7-22b58c654186 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.076941739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731349076913947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e95ef77-e80f-4acf-adf7-22b58c654186 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.077565838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c200a84-942e-4fdc-9a7c-d6080b092f4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.077636896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c200a84-942e-4fdc-9a7c-d6080b092f4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:09 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:29:09.077887363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c200a84-942e-4fdc-9a7c-d6080b092f4c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34bf0e9504879       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   f98cfca649c08       storage-provisioner
	5714e350b7d3e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   5a38baa576513       busybox
	5cf9a1c37ebd1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   fb8d88e7d4e57       coredns-7db6d8ff4d-b8v28
	53cb13593bed6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   f98cfca649c08       storage-provisioner
	572acf711df5e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   c09c337b66218       kube-proxy-zz7fr
	11c7eacd29c36       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   317b235c3055d       kube-scheduler-default-k8s-diff-port-969068
	f021cd4986aa6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   b5127a98d6b93       kube-controller-manager-default-k8s-diff-port-969068
	0b0897d8c61e8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   5d489d038dfa9       kube-apiserver-default-k8s-diff-port-969068
	7b181ffd7672a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   10cc0f79810c2       etcd-default-k8s-diff-port-969068
	
	
	==> coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37873 - 45817 "HINFO IN 5416323336611825304.3429816356777871689. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009957744s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-969068
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-969068
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=default-k8s-diff-port-969068
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_08_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:08:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-969068
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:29:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:26:21 +0000   Sun, 04 Aug 2024 00:08:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:26:21 +0000   Sun, 04 Aug 2024 00:08:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:26:21 +0000   Sun, 04 Aug 2024 00:08:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:26:21 +0000   Sun, 04 Aug 2024 00:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    default-k8s-diff-port-969068
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1731be18e0dd44ebb52e79b8fbffcd93
	  System UUID:                1731be18-e0dd-44eb-b52e-79b8fbffcd93
	  Boot ID:                    ae2bf9db-2992-49a8-8008-f8c73d0c354b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-b8v28                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-969068                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-969068             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-969068    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-zz7fr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-969068             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-646qm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-969068 event: Registered Node default-k8s-diff-port-969068 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-969068 event: Registered Node default-k8s-diff-port-969068 in Controller
	
	
	==> dmesg <==
	[Aug 4 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054847] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039815] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.866982] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.578796] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.616932] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.825084] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.065143] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070191] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.213654] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.132525] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.368842] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.956402] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.062762] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.254526] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.646814] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.508736] systemd-fstab-generator[1559]: Ignoring "noauto" option for root device
	[  +1.251637] kauditd_printk_skb: 62 callbacks suppressed
	[  +9.486368] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] <==
	{"level":"warn","ts":"2024-08-04T00:15:53.584572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.278043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2024-08-04T00:15:53.590535Z","caller":"traceutil/trace.go:171","msg":"trace[1096125820] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:624; }","duration":"165.264435ms","start":"2024-08-04T00:15:53.425253Z","end":"2024-08-04T00:15:53.590517Z","steps":["trace[1096125820] 'range keys from in-memory index tree'  (duration: 159.185521ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:15:54.099971Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4444649426301803828,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-04T00:15:54.123014Z","caller":"traceutil/trace.go:171","msg":"trace[642595376] linearizableReadLoop","detail":"{readStateIndex:668; appliedIndex:667; }","duration":"523.121655ms","start":"2024-08-04T00:15:53.599874Z","end":"2024-08-04T00:15:54.122996Z","steps":["trace[642595376] 'read index received'  (duration: 522.776127ms)","trace[642595376] 'applied index is now lower than readState.Index'  (duration: 344.744µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T00:15:54.123479Z","caller":"traceutil/trace.go:171","msg":"trace[665176560] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"524.621241ms","start":"2024-08-04T00:15:53.598841Z","end":"2024-08-04T00:15:54.123463Z","steps":["trace[665176560] 'process raft request'  (duration: 523.882729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:15:54.124305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:15:53.598823Z","time spent":"524.737877ms","remote":"127.0.0.1:52904","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5632,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" mod_revision:529 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" value_size:5564 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" > >"}
	{"level":"warn","ts":"2024-08-04T00:15:54.124653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"524.761906ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" ","response":"range_response_count:1 size:5647"}
	{"level":"info","ts":"2024-08-04T00:15:54.125348Z","caller":"traceutil/trace.go:171","msg":"trace[1598349519] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-969068; range_end:; response_count:1; response_revision:625; }","duration":"525.476182ms","start":"2024-08-04T00:15:53.599849Z","end":"2024-08-04T00:15:54.125325Z","steps":["trace[1598349519] 'agreement among raft nodes before linearized reading'  (duration: 524.746653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:15:54.12509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"518.494219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5550"}
	{"level":"info","ts":"2024-08-04T00:15:54.127307Z","caller":"traceutil/trace.go:171","msg":"trace[519013044] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:625; }","duration":"520.726746ms","start":"2024-08-04T00:15:53.606558Z","end":"2024-08-04T00:15:54.127284Z","steps":["trace[519013044] 'agreement among raft nodes before linearized reading'  (duration: 518.437433ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:15:54.127433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:15:53.606547Z","time spent":"520.875305ms","remote":"127.0.0.1:52898","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5572,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2024-08-04T00:15:54.127247Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:15:53.599835Z","time spent":"527.394443ms","remote":"127.0.0.1:52904","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5669,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" "}
	{"level":"warn","ts":"2024-08-04T00:15:54.645505Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":4444649426301803835,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-04T00:15:54.725431Z","caller":"traceutil/trace.go:171","msg":"trace[56099101] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"582.446417ms","start":"2024-08-04T00:15:54.142967Z","end":"2024-08-04T00:15:54.725414Z","steps":["trace[56099101] 'process raft request'  (duration: 535.083918ms)","trace[56099101] 'compare'  (duration: 46.475139ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T00:15:54.725584Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:15:54.142945Z","time spent":"582.586109ms","remote":"127.0.0.1:52904","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5460,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" mod_revision:625 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" value_size:5392 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-969068\" > >"}
	{"level":"info","ts":"2024-08-04T00:15:54.735973Z","caller":"traceutil/trace.go:171","msg":"trace[492714136] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:668; }","duration":"590.999765ms","start":"2024-08-04T00:15:54.14496Z","end":"2024-08-04T00:15:54.73596Z","steps":["trace[492714136] 'read index received'  (duration: 533.100164ms)","trace[492714136] 'applied index is now lower than readState.Index'  (duration: 57.898836ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T00:15:54.736302Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"591.279919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-969068\" ","response":"range_response_count:1 size:5550"}
	{"level":"info","ts":"2024-08-04T00:15:54.736348Z","caller":"traceutil/trace.go:171","msg":"trace[625963288] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-969068; range_end:; response_count:1; response_revision:626; }","duration":"591.411895ms","start":"2024-08-04T00:15:54.144928Z","end":"2024-08-04T00:15:54.73634Z","steps":["trace[625963288] 'agreement among raft nodes before linearized reading'  (duration: 591.248125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:15:54.736372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:15:54.144918Z","time spent":"591.447877ms","remote":"127.0.0.1:52898","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5572,"request content":"key:\"/registry/minions/default-k8s-diff-port-969068\" "}
	{"level":"warn","ts":"2024-08-04T00:15:54.736501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"584.116478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-08-04T00:15:54.736541Z","caller":"traceutil/trace.go:171","msg":"trace[875039070] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:626; }","duration":"584.349501ms","start":"2024-08-04T00:15:54.152186Z","end":"2024-08-04T00:15:54.736536Z","steps":["trace[875039070] 'agreement among raft nodes before linearized reading'  (duration: 584.285476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:15:54.736563Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:15:54.152164Z","time spent":"584.394016ms","remote":"127.0.0.1:52916","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":229,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"info","ts":"2024-08-04T00:25:38.108075Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":878}
	{"level":"info","ts":"2024-08-04T00:25:38.117909Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":878,"took":"9.48251ms","hash":3232677356,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2617344,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-08-04T00:25:38.117968Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3232677356,"revision":878,"compact-revision":-1}
	
	
	==> kernel <==
	 00:29:09 up 14 min,  0 users,  load average: 0.21, 0.09, 0.08
	Linux default-k8s-diff-port-969068 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] <==
	I0804 00:23:40.408064       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:25:39.407271       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:25:39.407387       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0804 00:25:40.407562       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:25:40.407617       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:25:40.407626       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:25:40.407664       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:25:40.407711       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:25:40.408835       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:26:40.407957       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:26:40.408039       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:26:40.408049       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:26:40.409271       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:26:40.409338       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:26:40.409346       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:28:40.408948       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:28:40.409224       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:28:40.409254       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:28:40.410118       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:28:40.410225       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:28:40.410310       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] <==
	I0804 00:23:26.292608       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:23:55.793710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:23:56.301487       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:24:25.798853       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:24:26.310893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:24:55.804954       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:24:56.318852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:25:25.809563       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:25:26.326554       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:25:55.815623       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:25:56.334235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:26:25.820570       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:26:26.342042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:26:55.825753       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:26:56.352128       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:26:58.996642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="345.397µs"
	I0804 00:27:10.992875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="113.234µs"
	E0804 00:27:25.830585       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:27:26.361012       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:27:55.836181       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:27:56.370889       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:28:25.842362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:28:26.379322       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:28:55.853673       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:28:56.387045       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] <==
	I0804 00:15:40.677126       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:15:40.698470       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.132"]
	I0804 00:15:40.777445       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:15:40.777594       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:15:40.777702       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:15:40.783055       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:15:40.783316       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:15:40.783536       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:15:40.787206       1 config.go:192] "Starting service config controller"
	I0804 00:15:40.787984       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:15:40.788568       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:15:40.788668       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:15:40.789253       1 config.go:319] "Starting node config controller"
	I0804 00:15:40.789351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:15:40.888974       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:15:40.889075       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:15:40.889526       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] <==
	I0804 00:15:36.857728       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:15:39.394229       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:15:39.394266       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:15:39.394276       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:15:39.394284       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:15:39.437890       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:15:39.438008       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:15:39.448044       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:15:39.448083       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:15:39.448576       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:15:39.449255       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:15:39.549918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:26:45 default-k8s-diff-port-969068 kubelet[932]: E0804 00:26:45.991435     932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 04 00:26:45 default-k8s-diff-port-969068 kubelet[932]: E0804 00:26:45.991518     932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 04 00:26:45 default-k8s-diff-port-969068 kubelet[932]: E0804 00:26:45.991721     932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dz29k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-646qm_kube-system(c28af6f2-95c1-44ae-833a-d426ca62a169): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 04 00:26:45 default-k8s-diff-port-969068 kubelet[932]: E0804 00:26:45.991754     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:26:58 default-k8s-diff-port-969068 kubelet[932]: E0804 00:26:58.976351     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:27:10 default-k8s-diff-port-969068 kubelet[932]: E0804 00:27:10.975455     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:27:23 default-k8s-diff-port-969068 kubelet[932]: E0804 00:27:23.975200     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:27:34 default-k8s-diff-port-969068 kubelet[932]: E0804 00:27:34.991941     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:27:34 default-k8s-diff-port-969068 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:27:34 default-k8s-diff-port-969068 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:27:34 default-k8s-diff-port-969068 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:27:34 default-k8s-diff-port-969068 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:27:35 default-k8s-diff-port-969068 kubelet[932]: E0804 00:27:35.975862     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:27:48 default-k8s-diff-port-969068 kubelet[932]: E0804 00:27:48.975406     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:27:59 default-k8s-diff-port-969068 kubelet[932]: E0804 00:27:59.975710     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:28:10 default-k8s-diff-port-969068 kubelet[932]: E0804 00:28:10.975586     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:28:23 default-k8s-diff-port-969068 kubelet[932]: E0804 00:28:23.975366     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:28:34 default-k8s-diff-port-969068 kubelet[932]: E0804 00:28:34.991120     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:28:34 default-k8s-diff-port-969068 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:28:34 default-k8s-diff-port-969068 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:28:34 default-k8s-diff-port-969068 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:28:34 default-k8s-diff-port-969068 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:28:36 default-k8s-diff-port-969068 kubelet[932]: E0804 00:28:36.976054     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:28:50 default-k8s-diff-port-969068 kubelet[932]: E0804 00:28:50.977429     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:29:01 default-k8s-diff-port-969068 kubelet[932]: E0804 00:29:01.975647     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	
	
	==> storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] <==
	I0804 00:16:11.328778       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:16:11.337366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:16:11.337569       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:16:28.737069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:16:28.737277       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-969068_3c0407ad-5d35-410d-833f-6bff51709cbd!
	I0804 00:16:28.738433       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db5a6da5-0284-4a8e-a871-d4eb2be7e069", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-969068_3c0407ad-5d35-410d-833f-6bff51709cbd became leader
	I0804 00:16:28.837887       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-969068_3c0407ad-5d35-410d-833f-6bff51709cbd!
	
	
	==> storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] <==
	I0804 00:15:40.576307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0804 00:16:10.580730       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-646qm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 describe pod metrics-server-569cc877fc-646qm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-969068 describe pod metrics-server-569cc877fc-646qm: exit status 1 (62.823051ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-646qm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-969068 describe pod metrics-server-569cc877fc-646qm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-877598 -n embed-certs-877598
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-04 00:29:27.474167145 +0000 UTC m=+6103.416411181
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-877598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-877598 logs -n 25: (2.173060347s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302198                           | kubernetes-upgrade-302198    | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-551054 sudo                            | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877598            | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-705918                              | cert-expiration-705918       | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-423330 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | disable-driver-mounts-423330                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:09 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-118016             | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC | 04 Aug 24 00:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-576210        | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:11:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:11:52.361065   65441 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:11:52.361334   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361345   65441 out.go:304] Setting ErrFile to fd 2...
	I0804 00:11:52.361349   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361548   65441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:11:52.362087   65441 out.go:298] Setting JSON to false
	I0804 00:11:52.363002   65441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6856,"bootTime":1722723456,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:11:52.363061   65441 start.go:139] virtualization: kvm guest
	I0804 00:11:52.365345   65441 out.go:177] * [default-k8s-diff-port-969068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:11:52.367170   65441 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:11:52.367161   65441 notify.go:220] Checking for updates...
	I0804 00:11:52.369837   65441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:11:52.371134   65441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:11:52.372226   65441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:11:52.373445   65441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:11:52.374802   65441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:11:52.376375   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:11:52.376787   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.376859   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.392495   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0804 00:11:52.392954   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.393477   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.393497   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.393883   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.394048   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.394313   65441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:11:52.394606   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.394638   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.409194   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0804 00:11:52.409594   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.410032   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.410050   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.410358   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.410529   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.445480   65441 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:11:52.446679   65441 start.go:297] selected driver: kvm2
	I0804 00:11:52.446694   65441 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.446827   65441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:11:52.447792   65441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.447886   65441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:11:52.462893   65441 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:11:52.463275   65441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:11:52.463306   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:11:52.463316   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:11:52.463368   65441 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.463486   65441 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.465374   65441 out.go:177] * Starting "default-k8s-diff-port-969068" primary control-plane node in "default-k8s-diff-port-969068" cluster
	I0804 00:11:52.466656   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:11:52.466698   65441 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:11:52.466710   65441 cache.go:56] Caching tarball of preloaded images
	I0804 00:11:52.466790   65441 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:11:52.466801   65441 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:11:52.466901   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:11:52.467100   65441 start.go:360] acquireMachinesLock for default-k8s-diff-port-969068: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:11:55.809602   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:11:58.881666   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:04.961665   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:08.033617   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:14.113634   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:17.185623   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:23.265618   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:26.337594   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:32.417583   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:35.489705   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:41.569654   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:44.641653   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:50.721640   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:53.793649   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:59.873643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:02.945676   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:09.025652   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:12.097647   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:18.177740   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:21.249606   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:27.329637   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:30.401648   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:36.481588   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:39.553638   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:45.633633   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:48.705646   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:54.785636   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:57.857662   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:03.937643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:07.009557   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:13.089694   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:16.161619   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:22.241650   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:25.313612   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:28.318586   64758 start.go:364] duration metric: took 4m16.324186239s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:14:28.318635   64758 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:28.318646   64758 fix.go:54] fixHost starting: 
	I0804 00:14:28.319092   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:28.319128   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:28.334850   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0804 00:14:28.335321   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:28.335817   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:14:28.335848   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:28.336204   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:28.336435   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:28.336622   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:14:28.338146   64758 fix.go:112] recreateIfNeeded on old-k8s-version-576210: state=Stopped err=<nil>
	I0804 00:14:28.338166   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	W0804 00:14:28.338322   64758 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:28.340640   64758 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	I0804 00:14:28.315605   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:28.315642   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316035   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:14:28.316073   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316325   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:14:28.318440   64502 machine.go:97] duration metric: took 4m37.42620041s to provisionDockerMachine
	I0804 00:14:28.318477   64502 fix.go:56] duration metric: took 4m37.448052873s for fixHost
	I0804 00:14:28.318485   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 4m37.44807127s
	W0804 00:14:28.318509   64502 start.go:714] error starting host: provision: host is not running
	W0804 00:14:28.318594   64502 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0804 00:14:28.318606   64502 start.go:729] Will try again in 5 seconds ...
	I0804 00:14:28.342217   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .Start
	I0804 00:14:28.342401   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:14:28.343274   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:14:28.343761   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:14:28.344268   64758 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:14:28.345080   64758 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:14:29.575420   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:14:29.576307   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.576754   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.576842   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.576711   66003 retry.go:31] will retry after 272.821874ms: waiting for machine to come up
	I0804 00:14:29.851363   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.851951   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.851976   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.851895   66003 retry.go:31] will retry after 247.116514ms: waiting for machine to come up
	I0804 00:14:30.100479   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.100883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.100916   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.100833   66003 retry.go:31] will retry after 353.251065ms: waiting for machine to come up
	I0804 00:14:30.455526   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.455975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.456004   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.455933   66003 retry.go:31] will retry after 558.071575ms: waiting for machine to come up
	I0804 00:14:31.015539   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.015974   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.016000   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.015917   66003 retry.go:31] will retry after 514.757536ms: waiting for machine to come up
	I0804 00:14:31.532799   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.533232   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.533250   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.533186   66003 retry.go:31] will retry after 607.548546ms: waiting for machine to come up
	I0804 00:14:33.318807   64502 start.go:360] acquireMachinesLock for embed-certs-877598: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:14:32.142162   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:32.142658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:32.142693   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:32.142610   66003 retry.go:31] will retry after 897.977595ms: waiting for machine to come up
	I0804 00:14:33.042628   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:33.043002   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:33.043028   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:33.042966   66003 retry.go:31] will retry after 1.094117762s: waiting for machine to come up
	I0804 00:14:34.138946   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:34.139459   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:34.139485   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:34.139414   66003 retry.go:31] will retry after 1.435055372s: waiting for machine to come up
	I0804 00:14:35.576253   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:35.576603   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:35.576625   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:35.576547   66003 retry.go:31] will retry after 1.688006591s: waiting for machine to come up
	I0804 00:14:37.265928   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:37.266429   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:37.266456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:37.266371   66003 retry.go:31] will retry after 2.356818801s: waiting for machine to come up
	I0804 00:14:39.624408   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:39.624832   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:39.624863   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:39.624775   66003 retry.go:31] will retry after 2.41856098s: waiting for machine to come up
	I0804 00:14:46.442402   65087 start.go:364] duration metric: took 3m44.405576801s to acquireMachinesLock for "no-preload-118016"
	I0804 00:14:46.442459   65087 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:46.442469   65087 fix.go:54] fixHost starting: 
	I0804 00:14:46.442938   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:46.442975   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:46.459944   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0804 00:14:46.460375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:46.460851   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:14:46.460871   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:46.461211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:46.461402   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:14:46.461538   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:14:46.463097   65087 fix.go:112] recreateIfNeeded on no-preload-118016: state=Stopped err=<nil>
	I0804 00:14:46.463126   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	W0804 00:14:46.463282   65087 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:46.465711   65087 out.go:177] * Restarting existing kvm2 VM for "no-preload-118016" ...
	I0804 00:14:42.044498   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:42.044855   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:42.044882   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:42.044822   66003 retry.go:31] will retry after 3.111190148s: waiting for machine to come up
	I0804 00:14:45.158161   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.158688   64758 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:14:45.158709   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:14:45.158719   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.159112   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.159138   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | skip adding static IP to network mk-old-k8s-version-576210 - found existing host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"}
	I0804 00:14:45.159151   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:14:45.159163   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:14:45.159172   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:14:45.161469   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161782   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.161812   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161936   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:14:45.161975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:14:45.162015   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:14:45.162034   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:14:45.162044   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:14:45.281546   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
	I0804 00:14:45.281859   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:14:45.282574   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.284998   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285386   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.285414   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285614   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:14:45.285806   64758 machine.go:94] provisionDockerMachine start ...
	I0804 00:14:45.285823   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:45.286098   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.288285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288640   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.288668   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288753   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.288931   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289088   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289253   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.289426   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.289628   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.289640   64758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:14:45.386001   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:14:45.386036   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386325   64758 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:14:45.386348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386536   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.389316   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389718   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.389739   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389948   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.390122   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390285   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390415   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.390557   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.390758   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.390776   64758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:14:45.499644   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:14:45.499695   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.502583   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.502935   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.502959   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.503123   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.503318   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503456   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503570   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.503729   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.503898   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.503915   64758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:14:45.606971   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:45.607003   64758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:14:45.607045   64758 buildroot.go:174] setting up certificates
	I0804 00:14:45.607053   64758 provision.go:84] configureAuth start
	I0804 00:14:45.607062   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.607327   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.610009   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610378   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.610407   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610545   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.612549   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.612876   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.612908   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.613071   64758 provision.go:143] copyHostCerts
	I0804 00:14:45.613134   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:14:45.613147   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:14:45.613231   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:14:45.613343   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:14:45.613368   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:14:45.613410   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:14:45.613491   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:14:45.613501   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:14:45.613535   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:14:45.613609   64758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
	I0804 00:14:45.794221   64758 provision.go:177] copyRemoteCerts
	I0804 00:14:45.794276   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:14:45.794299   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.796859   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797182   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.797225   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.797555   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.797687   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.797804   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:45.875704   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:14:45.903765   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:14:45.930101   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:14:45.955639   64758 provision.go:87] duration metric: took 348.556108ms to configureAuth
	I0804 00:14:45.955668   64758 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:14:45.955874   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:14:45.955960   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.958487   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958835   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.958950   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958970   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.959193   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.959616   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.959789   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.959810   64758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:14:46.217683   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:14:46.217725   64758 machine.go:97] duration metric: took 931.901933ms to provisionDockerMachine
	I0804 00:14:46.217742   64758 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:14:46.217758   64758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:14:46.217787   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.218127   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:14:46.218151   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.220834   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221148   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.221170   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221342   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.221576   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.221733   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.221867   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.300102   64758 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:14:46.304434   64758 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:14:46.304464   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:14:46.304538   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:14:46.304631   64758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:14:46.304747   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:14:46.314378   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:46.339057   64758 start.go:296] duration metric: took 121.299069ms for postStartSetup
	I0804 00:14:46.339105   64758 fix.go:56] duration metric: took 18.020458894s for fixHost
	I0804 00:14:46.339129   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.341883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342258   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.342285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.342688   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342856   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342992   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.343161   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:46.343385   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:46.343400   64758 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:14:46.442247   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730486.414818212
	
	I0804 00:14:46.442275   64758 fix.go:216] guest clock: 1722730486.414818212
	I0804 00:14:46.442288   64758 fix.go:229] Guest: 2024-08-04 00:14:46.414818212 +0000 UTC Remote: 2024-08-04 00:14:46.339109981 +0000 UTC m=+274.490542023 (delta=75.708231ms)
	I0804 00:14:46.442313   64758 fix.go:200] guest clock delta is within tolerance: 75.708231ms
	I0804 00:14:46.442319   64758 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 18.123699316s
	I0804 00:14:46.442347   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.442656   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:46.445456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.445865   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.445892   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.446069   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446577   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446743   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446816   64758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:14:46.446850   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.446965   64758 ssh_runner.go:195] Run: cat /version.json
	I0804 00:14:46.446987   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.449576   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449794   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449953   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.449983   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450178   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450265   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.450317   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450384   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450520   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450605   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450667   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450733   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.450780   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450910   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.534686   64758 ssh_runner.go:195] Run: systemctl --version
	I0804 00:14:46.554270   64758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:14:46.708220   64758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:14:46.714541   64758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:14:46.714607   64758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:14:46.731642   64758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
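The two lines above disable conflicting bridge/podman CNI configs by renaming them with a ".mk_disabled" suffix. A small Go sketch of the same idea follows; it operates on a local directory for simplicity, whereas the log shows the equivalent `find ... -exec mv` run on the VM over SSH, so the function name and local-filesystem approach are assumptions.

// cni_disable_sketch.go: rename bridge/podman CNI configs so CRI-O's own
// bridge config is the only one the runtime picks up.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}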
	I0804 00:14:46.731668   64758 start.go:495] detecting cgroup driver to use...
	I0804 00:14:46.731739   64758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:14:46.748782   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:14:46.763556   64758 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:14:46.763640   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:14:46.778075   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:14:46.793133   64758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:14:46.466927   65087 main.go:141] libmachine: (no-preload-118016) Calling .Start
	I0804 00:14:46.467081   65087 main.go:141] libmachine: (no-preload-118016) Ensuring networks are active...
	I0804 00:14:46.467696   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network default is active
	I0804 00:14:46.468023   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network mk-no-preload-118016 is active
	I0804 00:14:46.468344   65087 main.go:141] libmachine: (no-preload-118016) Getting domain xml...
	I0804 00:14:46.468932   65087 main.go:141] libmachine: (no-preload-118016) Creating domain...
	I0804 00:14:46.918377   64758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:14:47.059683   64758 docker.go:233] disabling docker service ...
	I0804 00:14:47.059753   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:14:47.074819   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:14:47.092184   64758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:14:47.235274   64758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:14:47.357937   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:14:47.375273   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:14:47.395182   64758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:14:47.395236   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.407036   64758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:14:47.407092   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.418562   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.434481   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
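The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to set the pause image, the cgroup manager, and conmon's cgroup. A sketch in Go that just assembles those command strings is shown below; the helper name and the local printing are assumptions (in the log these strings are executed remotely via ssh_runner).

// crio_config_sketch.go: build the sed edits used to point CRI-O at the
// desired pause image and cgroup driver.
package main

import "fmt"

func crioConfigCommands(pauseImage, cgroupDriver, confPath string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, confPath),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, confPath),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, confPath),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, confPath),
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.2", "cgroupfs", "/etc/crio/crio.conf.d/02-crio.conf") {
		fmt.Println(cmd) // printed here; minikube runs each of these over SSH
	}
}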
	I0804 00:14:47.447488   64758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:14:47.460242   64758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:14:47.471089   64758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:14:47.471143   64758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:14:47.486698   64758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
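The sequence above shows the netfilter fallback: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled. A compact Go sketch of that fallback follows; it assumes local command execution (the real flow runs these over SSH), and the function name is illustrative.

// netfilter_sketch.go: if the bridge-nf-call-iptables sysctl is missing,
// load br_netfilter, then enable IPv4 forwarding for pod traffic.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// The sysctl key only exists once the br_netfilter module is loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	// Enable IPv4 forwarding so traffic can be routed between pods and the host.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("error:", err)
	}
}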
	I0804 00:14:47.498754   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:47.630867   64758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:14:47.796598   64758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:14:47.796690   64758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:14:47.802302   64758 start.go:563] Will wait 60s for crictl version
	I0804 00:14:47.802364   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:47.806368   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:14:47.847588   64758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:14:47.847679   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.877936   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.908229   64758 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:14:47.909635   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:47.912658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913102   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:47.913130   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913438   64758 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:14:47.917910   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:47.931201   64758 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:14:47.931318   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:14:47.931381   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:47.980001   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:47.980071   64758 ssh_runner.go:195] Run: which lz4
	I0804 00:14:47.984277   64758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:14:47.988781   64758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:14:47.988810   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:14:49.706968   64758 crio.go:462] duration metric: took 1.722721175s to copy over tarball
	I0804 00:14:49.707059   64758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:14:47.715321   65087 main.go:141] libmachine: (no-preload-118016) Waiting to get IP...
	I0804 00:14:47.716397   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.716853   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.716889   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.716820   66120 retry.go:31] will retry after 187.841432ms: waiting for machine to come up
	I0804 00:14:47.906481   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.906984   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.907018   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.906942   66120 retry.go:31] will retry after 389.569097ms: waiting for machine to come up
	I0804 00:14:48.298691   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.299997   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.300021   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.299947   66120 retry.go:31] will retry after 382.905254ms: waiting for machine to come up
	I0804 00:14:48.684628   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.685095   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.685127   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.685066   66120 retry.go:31] will retry after 526.267085ms: waiting for machine to come up
	I0804 00:14:49.213459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.214180   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.214203   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.214142   66120 retry.go:31] will retry after 666.253139ms: waiting for machine to come up
	I0804 00:14:49.882141   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.882610   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.882639   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.882560   66120 retry.go:31] will retry after 776.560525ms: waiting for machine to come up
	I0804 00:14:50.660679   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:50.661149   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:50.661177   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:50.661105   66120 retry.go:31] will retry after 825.927722ms: waiting for machine to come up
	I0804 00:14:51.488562   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:51.488937   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:51.488964   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:51.488894   66120 retry.go:31] will retry after 1.210535859s: waiting for machine to come up
	I0804 00:14:52.511242   64758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.804147671s)
	I0804 00:14:52.511275   64758 crio.go:469] duration metric: took 2.804279705s to extract the tarball
	I0804 00:14:52.511285   64758 ssh_runner.go:146] rm: /preloaded.tar.lz4
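The preload flow just completed: the tarball is absent on the VM, so the cached lz4 archive is copied over, extracted into /var while preserving xattrs, and then removed. A sketch of that flow follows; the runner interface and function names are assumptions, while the command strings and paths come from the log lines above.

// preload_sketch.go: copy and unpack the preloaded image tarball if missing.
package main

import "fmt"

type runner interface {
	Run(cmd string) error            // run a command on the VM
	Copy(local, remote string) error // scp a local file to the VM
}

func ensurePreload(r runner, localTarball string) error {
	// Existence check mirrors: stat -c "%s %y" /preloaded.tar.lz4
	if err := r.Run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		if err := r.Copy(localTarball, "/preloaded.tar.lz4"); err != nil {
			return fmt.Errorf("copy preload tarball: %w", err)
		}
	}
	// Extract preserving security.capability xattrs, then clean up the archive.
	if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("extract preload tarball: %w", err)
	}
	return r.Run("rm -f /preloaded.tar.lz4")
}

type printRunner struct{}

func (printRunner) Run(cmd string) error            { fmt.Println("run:", cmd); return nil }
func (printRunner) Copy(local, remote string) error { fmt.Println("copy:", local, "->", remote); return nil }

func main() {
	_ = ensurePreload(printRunner{}, "/home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
}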
	I0804 00:14:52.553905   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:52.587405   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:52.587429   64758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:14:52.587496   64758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.587513   64758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.587550   64758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.587551   64758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.587554   64758 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.587567   64758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.587570   64758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.587577   64758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.589240   64758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.589239   64758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.589247   64758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.589211   64758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.589287   64758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589579   64758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.742969   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.766505   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.782813   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:14:52.788509   64758 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:14:52.788553   64758 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.788598   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.823108   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.829531   64758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:14:52.829577   64758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.829648   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.858209   64758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:14:52.858238   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.858245   64758 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:14:52.858288   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.888665   64758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:14:52.888717   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.888748   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:14:52.888717   64758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.888794   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.918127   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.921386   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:14:52.929839   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.977866   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:14:52.977919   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.977960   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:14:52.994379   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.003198   64758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:14:53.003233   64758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.003273   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.056310   64758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:14:53.056338   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:14:53.056357   64758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.056403   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.062077   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.062119   64758 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:14:53.062161   64758 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.062206   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.064260   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.114709   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:14:53.114758   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.118375   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:14:53.147635   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:14:53.497155   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:53.647242   64758 cache_images.go:92] duration metric: took 1.059794593s to LoadCachedImages
	W0804 00:14:53.647353   64758 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0804 00:14:53.647370   64758 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:14:53.647507   64758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:14:53.647586   64758 ssh_runner.go:195] Run: crio config
	I0804 00:14:53.710377   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:14:53.710399   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:14:53.710411   64758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:14:53.710437   64758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:14:53.710583   64758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:14:53.710661   64758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:14:53.721942   64758 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:14:53.722005   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:14:53.732623   64758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:14:53.749878   64758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:14:53.767147   64758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0804 00:14:53.785522   64758 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:14:53.789438   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:53.802152   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:53.934508   64758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:14:53.952247   64758 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:14:53.952280   64758 certs.go:194] generating shared ca certs ...
	I0804 00:14:53.952301   64758 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:53.952470   64758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:14:53.952523   64758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:14:53.952536   64758 certs.go:256] generating profile certs ...
	I0804 00:14:53.952658   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:14:53.952730   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:14:53.952783   64758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:14:53.952948   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:14:53.953000   64758 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:14:53.953013   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:14:53.953048   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:14:53.953084   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:14:53.953114   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:14:53.953191   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:53.954013   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:14:54.001446   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:14:54.029628   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:14:54.062713   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:14:54.090711   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:14:54.117970   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:14:54.163691   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:14:54.190151   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:14:54.219334   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:14:54.244677   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:14:54.269795   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:14:54.294949   64758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:14:54.312330   64758 ssh_runner.go:195] Run: openssl version
	I0804 00:14:54.318320   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:14:54.328932   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333686   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333737   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.341330   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:14:54.356008   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:14:54.368966   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373896   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373954   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.379770   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:14:54.390903   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:14:54.402637   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407296   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407362   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.413215   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
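The block above installs CA bundles by copying each PEM under /usr/share/ca-certificates, computing its OpenSSL subject hash, and symlinking /etc/ssl/certs/<hash>.0 to it (minikubeCA.pem hashes to b5213941 in this run). A Go sketch of that step is below; it assumes local execution and an illustrative function name, whereas the log runs the same openssl/ln commands over SSH.

// cert_symlink_sketch.go: register a CA certificate via its subject-hash symlink.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// e.g. /usr/share/ca-certificates/minikubeCA.pem -> /etc/ssl/certs/b5213941.0
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}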
	I0804 00:14:54.424473   64758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:14:54.429673   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:14:54.436038   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:14:54.442091   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:14:54.448507   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:14:54.455421   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:14:54.461969   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
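The `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours; a non-zero exit is the signal to regenerate. A short Go sketch of that check follows (local exec and the helper name are assumptions).

// cert_expiry_sketch.go: flag certificates that expire within the next 24h.
package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil // non-zero exit: the cert expires within 86400 seconds
}

func main() {
	for _, cert := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Printf("%s expires within 24h: %v\n", cert, expiresWithinADay(cert))
	}
}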
	I0804 00:14:54.468042   64758 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:14:54.468151   64758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:14:54.468208   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.508109   64758 cri.go:89] found id: ""
	I0804 00:14:54.508183   64758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:14:54.518712   64758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:14:54.518736   64758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:14:54.518788   64758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:14:54.528545   64758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:14:54.529780   64758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:14:54.530411   64758 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-576210" cluster setting kubeconfig missing "old-k8s-version-576210" context setting]
	I0804 00:14:54.531316   64758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:54.550431   64758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:14:54.561047   64758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.154
	I0804 00:14:54.561086   64758 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:14:54.561108   64758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:14:54.561163   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.597213   64758 cri.go:89] found id: ""
	I0804 00:14:54.597282   64758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:14:54.612914   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:14:54.622533   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:14:54.622562   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:14:54.622613   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:14:54.632746   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:14:54.632812   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:14:54.642197   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:14:54.651204   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:14:54.651268   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:14:54.660496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.669448   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:14:54.669512   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.678773   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:14:54.687854   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:14:54.687902   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
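The grep/rm pairs above implement the stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the subsequent `kubeadm init phase kubeconfig` regenerates it. A sketch of that loop follows; the cmdRunner type and function name are assumptions.

// stale_config_sketch.go: drop kubeconfigs that do not point at the expected endpoint.
package main

import "fmt"

type cmdRunner func(cmd string) error

func cleanStaleKubeconfigs(run cmdRunner) {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, conf)); err != nil {
			// Endpoint not found (or file missing): remove and regenerate later.
			_ = run("sudo rm -f " + conf)
		}
	}
}

func main() {
	cleanStaleKubeconfigs(func(cmd string) error { fmt.Println("run:", cmd); return nil })
}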
	I0804 00:14:54.697066   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:14:54.707036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:54.840553   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.551919   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.790500   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.898210   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.995621   64758 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:14:55.995711   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.496072   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
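The repeated pgrep lines here (and continuing below, interleaved with the no-preload machine logs) are a wait loop: the apiserver process is polled roughly every 500ms until pgrep succeeds or the overall timeout expires. A Go sketch of that loop is shown below; the interval and timeout values are assumptions taken from the cadence of the log.

// wait_apiserver_sketch.go: wait for the kube-apiserver process to appear.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exited 0: the apiserver process is running
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(90 * time.Second))
}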
	I0804 00:14:52.701200   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:52.701574   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:52.701598   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:52.701547   66120 retry.go:31] will retry after 1.518623613s: waiting for machine to come up
	I0804 00:14:54.221367   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:54.221886   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:54.221916   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:54.221835   66120 retry.go:31] will retry after 1.869121058s: waiting for machine to come up
	I0804 00:14:56.092101   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:56.092527   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:56.092550   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:56.092488   66120 retry.go:31] will retry after 2.071227436s: waiting for machine to come up
	I0804 00:14:56.995965   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.496285   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.995805   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.496549   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.996224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.496360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.996056   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.496435   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.166383   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:58.166760   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:58.166807   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:58.166729   66120 retry.go:31] will retry after 2.352991709s: waiting for machine to come up
	I0804 00:15:00.522153   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:00.522630   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:15:00.522657   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:15:00.522584   66120 retry.go:31] will retry after 3.326179831s: waiting for machine to come up
	I0804 00:15:05.170439   65441 start.go:364] duration metric: took 3m12.703297591s to acquireMachinesLock for "default-k8s-diff-port-969068"
	I0804 00:15:05.170512   65441 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:05.170520   65441 fix.go:54] fixHost starting: 
	I0804 00:15:05.170935   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:05.170974   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:05.188546   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0804 00:15:05.188997   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:05.189494   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:05.189518   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:05.189933   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:05.190132   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:05.190276   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:05.191653   65441 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969068: state=Stopped err=<nil>
	I0804 00:15:05.191684   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	W0804 00:15:05.191834   65441 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:05.194275   65441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-969068" ...
	I0804 00:15:01.996148   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.496756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.996430   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.496646   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.996707   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.496772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.995997   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.496651   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.996384   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.496403   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.850063   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850518   65087 main.go:141] libmachine: (no-preload-118016) Found IP for machine: 192.168.61.137
	I0804 00:15:03.850544   65087 main.go:141] libmachine: (no-preload-118016) Reserving static IP address...
	I0804 00:15:03.850559   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has current primary IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850970   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.851001   65087 main.go:141] libmachine: (no-preload-118016) DBG | skip adding static IP to network mk-no-preload-118016 - found existing host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"}
	I0804 00:15:03.851015   65087 main.go:141] libmachine: (no-preload-118016) Reserved static IP address: 192.168.61.137
	I0804 00:15:03.851030   65087 main.go:141] libmachine: (no-preload-118016) Waiting for SSH to be available...
	I0804 00:15:03.851048   65087 main.go:141] libmachine: (no-preload-118016) DBG | Getting to WaitForSSH function...
	I0804 00:15:03.853316   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853676   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.853705   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853819   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH client type: external
	I0804 00:15:03.853850   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa (-rw-------)
	I0804 00:15:03.853886   65087 main.go:141] libmachine: (no-preload-118016) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:03.853901   65087 main.go:141] libmachine: (no-preload-118016) DBG | About to run SSH command:
	I0804 00:15:03.853913   65087 main.go:141] libmachine: (no-preload-118016) DBG | exit 0
	I0804 00:15:03.981414   65087 main.go:141] libmachine: (no-preload-118016) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:03.981807   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetConfigRaw
	I0804 00:15:03.982419   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:03.985062   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985400   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.985433   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985674   65087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/config.json ...
	I0804 00:15:03.985857   65087 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:03.985873   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:03.986090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:03.988490   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.988798   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.988826   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.989017   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:03.989183   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989342   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989510   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:03.989697   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:03.989916   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:03.989927   65087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:04.106042   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:04.106090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106372   65087 buildroot.go:166] provisioning hostname "no-preload-118016"
	I0804 00:15:04.106398   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.109434   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.109803   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109919   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.110092   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110248   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110423   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.110582   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.110749   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.110764   65087 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-118016 && echo "no-preload-118016" | sudo tee /etc/hostname
	I0804 00:15:04.239856   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-118016
	
	I0804 00:15:04.239884   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.242877   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243241   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.243271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243486   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.243712   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.243897   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.244046   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.244232   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.244420   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.244443   65087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118016/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:04.367259   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:04.367289   65087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:04.367330   65087 buildroot.go:174] setting up certificates
	I0804 00:15:04.367340   65087 provision.go:84] configureAuth start
	I0804 00:15:04.367432   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.367848   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:04.370330   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370630   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.370658   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370744   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.372799   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373175   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.373203   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373308   65087 provision.go:143] copyHostCerts
	I0804 00:15:04.373386   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:04.373399   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:04.373458   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:04.373557   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:04.373565   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:04.373585   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:04.373651   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:04.373657   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:04.373675   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:04.373732   65087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.no-preload-118016 san=[127.0.0.1 192.168.61.137 localhost minikube no-preload-118016]
	I0804 00:15:04.467261   65087 provision.go:177] copyRemoteCerts
	I0804 00:15:04.467322   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:04.467347   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.469843   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470126   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.470154   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470297   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.470478   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.470644   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.470761   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:04.559980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:04.585701   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:04.610270   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:04.633954   65087 provision.go:87] duration metric: took 266.53536ms to configureAuth
	I0804 00:15:04.633981   65087 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:04.634154   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:15:04.634219   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.636880   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637243   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.637271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637452   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.637664   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637823   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637921   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.638060   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.638234   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.638250   65087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:04.916045   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:04.916077   65087 machine.go:97] duration metric: took 930.20802ms to provisionDockerMachine
	I0804 00:15:04.916088   65087 start.go:293] postStartSetup for "no-preload-118016" (driver="kvm2")
	I0804 00:15:04.916100   65087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:04.916113   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:04.916429   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:04.916453   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.919155   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919485   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.919514   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919657   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.919859   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.920026   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.920166   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.012754   65087 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:05.017004   65087 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:05.017024   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:05.017091   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:05.017180   65087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:05.017293   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:05.026980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:05.051265   65087 start.go:296] duration metric: took 135.164451ms for postStartSetup
	I0804 00:15:05.051309   65087 fix.go:56] duration metric: took 18.608839754s for fixHost
	I0804 00:15:05.051331   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.054286   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054683   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.054710   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054876   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.055127   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055321   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055485   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.055668   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:05.055870   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:05.055882   65087 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:05.170285   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730505.141206116
	
	I0804 00:15:05.170314   65087 fix.go:216] guest clock: 1722730505.141206116
	I0804 00:15:05.170321   65087 fix.go:229] Guest: 2024-08-04 00:15:05.141206116 +0000 UTC Remote: 2024-08-04 00:15:05.051313292 +0000 UTC m=+243.154971169 (delta=89.892824ms)
	I0804 00:15:05.170341   65087 fix.go:200] guest clock delta is within tolerance: 89.892824ms
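Editor's note: the guest and host timestamps above differ by 89.892824ms, which is why fix.go reports the delta as within tolerance. A minimal sketch of that comparison, using the two timestamps taken from the log and an assumed 2-second threshold (the real tolerance value is not shown in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Date(2024, 8, 4, 0, 15, 5, 141206116, time.UTC)  // guest clock, from the log
	remote := time.Date(2024, 8, 4, 0, 15, 5, 51313292, time.UTC)  // host-side reference, from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold for the sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints 89.892824ms
	} else {
		fmt.Printf("guest clock is off by %v, a resync would be needed\n", delta)
	}
}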
	I0804 00:15:05.170359   65087 start.go:83] releasing machines lock for "no-preload-118016", held for 18.727925423s
	I0804 00:15:05.170392   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.170673   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:05.173694   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174084   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.174117   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174265   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.174828   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175015   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175103   65087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:05.175145   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.175263   65087 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:05.175286   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.177906   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178280   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178307   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178329   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178470   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.178688   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.178777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178832   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178854   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.178945   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.179025   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.179111   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.179265   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.179417   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.282397   65087 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:05.288682   65087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:05.434388   65087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:05.440857   65087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:05.440937   65087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:05.461853   65087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:05.461879   65087 start.go:495] detecting cgroup driver to use...
	I0804 00:15:05.461944   65087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:05.478397   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:05.494093   65087 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:05.494151   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:05.509391   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:05.524127   65087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:05.640185   65087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:05.784994   65087 docker.go:233] disabling docker service ...
	I0804 00:15:05.785071   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:05.802802   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:05.818424   65087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:05.970147   65087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:06.099759   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:06.114434   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:06.132989   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:06.433914   65087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0804 00:15:06.433969   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.452155   65087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:06.452245   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.464730   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.475848   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.488341   65087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:06.501984   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.514776   65087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.534773   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.547076   65087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:06.558639   65087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:06.558695   65087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:06.572920   65087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:06.583298   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:06.705307   65087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:06.845776   65087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:06.845840   65087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:06.851710   65087 start.go:563] Will wait 60s for crictl version
	I0804 00:15:06.851764   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:06.855899   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:06.904392   65087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:06.904493   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.932866   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.963071   65087 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
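Editor's note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed commands (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", unprivileged-port sysctl) and then restarts crio. Below is a minimal in-memory sketch of the first few substitutions against a sample config snippet; it is illustrative only, since minikube itself runs the sed commands over SSH exactly as logged:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample config snippet; the real file lives at /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Point cri-o at the pause image the log selects.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line and re-add it as "pod", mirroring the sed pair above.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}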
	I0804 00:15:05.195984   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Start
	I0804 00:15:05.196175   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring networks are active...
	I0804 00:15:05.196904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network default is active
	I0804 00:15:05.197256   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network mk-default-k8s-diff-port-969068 is active
	I0804 00:15:05.197709   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Getting domain xml...
	I0804 00:15:05.198474   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Creating domain...
	I0804 00:15:06.489009   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting to get IP...
	I0804 00:15:06.490137   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490569   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490641   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.490549   66290 retry.go:31] will retry after 298.701839ms: waiting for machine to come up
	I0804 00:15:06.791467   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791938   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791960   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.791894   66290 retry.go:31] will retry after 373.395742ms: waiting for machine to come up
	I0804 00:15:07.166622   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167139   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.167048   66290 retry.go:31] will retry after 404.799649ms: waiting for machine to come up
	I0804 00:15:06.995779   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.495822   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.995970   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.495870   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.996379   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.495852   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.495912   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.996591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.495964   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.964314   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:06.967088   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967517   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:06.967547   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967787   65087 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:06.973133   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:06.990153   65087 kubeadm.go:883] updating cluster {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:06.990339   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.297536   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.591746   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.874720   65087 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:15:07.874798   65087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:07.914104   65087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0804 00:15:07.914127   65087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:15:07.914172   65087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.914212   65087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:07.914237   65087 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0804 00:15:07.914253   65087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.914324   65087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.914225   65087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.915833   65087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915838   65087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.915816   65087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 00:15:07.915882   65087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.915962   65087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.916150   65087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.048225   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.050828   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.051873   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.056880   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.087643   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.091720   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0804 00:15:08.116485   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.173591   65087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0804 00:15:08.173642   65087 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.173686   65087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0804 00:15:08.173704   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.173725   65087 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.173777   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.191254   65087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0804 00:15:08.191298   65087 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.191352   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.195238   65087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0804 00:15:08.195290   65087 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.195340   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.246005   65087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0804 00:15:08.246048   65087 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.246100   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.336855   65087 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0804 00:15:08.336936   65087 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.336945   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.336965   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.337078   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.337120   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.337161   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.337207   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.425270   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425297   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.425296   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.425455   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425522   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.458378   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.458520   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.460719   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460827   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460889   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0804 00:15:08.460983   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:08.492690   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0804 00:15:08.492789   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492808   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492839   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:08.492852   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492863   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492932   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492976   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0804 00:15:08.493036   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0804 00:15:08.763401   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063302   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.570424927s)
	I0804 00:15:11.063326   65087 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.570469177s)
	I0804 00:15:11.063341   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0804 00:15:11.063348   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0804 00:15:11.063355   65087 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063377   65087 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.299939136s)
	I0804 00:15:11.063414   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063438   65087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0804 00:15:11.063468   65087 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063516   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:07.573639   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574103   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574150   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.574068   66290 retry.go:31] will retry after 552.033422ms: waiting for machine to come up
	I0804 00:15:08.127755   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128317   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128345   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.128254   66290 retry.go:31] will retry after 601.661676ms: waiting for machine to come up
	I0804 00:15:08.731160   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731571   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.731526   66290 retry.go:31] will retry after 899.954536ms: waiting for machine to come up
	I0804 00:15:09.632769   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633217   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:09.633188   66290 retry.go:31] will retry after 1.096119877s: waiting for machine to come up
	I0804 00:15:10.731586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732092   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732116   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:10.732062   66290 retry.go:31] will retry after 1.09033143s: waiting for machine to come up
	I0804 00:15:11.824287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824697   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824723   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:11.824648   66290 retry.go:31] will retry after 1.458040473s: waiting for machine to come up
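
The retry.go:31 lines above are libmachine polling the KVM domain's DHCP lease and sleeping for a slightly longer, jittered interval each time until the guest reports an IP address. A minimal sketch of that wait loop in Go, with a hypothetical lookupIP helper standing in for the libvirt lease query (this is not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the libvirt DHCP leases for the
// domain's MAC address; it keeps failing until the guest has an address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP polls lookupIP with a growing, lightly jittered delay between
// attempts, mirroring the "will retry after ..." lines above, until the
// machine comes up or the overall deadline expires.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 500 * time.Millisecond
	start := time.Now()
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("machine did not come up after %d attempts: %w", attempt, err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 4))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait, roughly like the 552ms, 601ms, 899ms, ... sequence above
	}
}

func main() {
	if ip, err := waitForIP("52:54:00:60:ac:10", 5*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}
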
	I0804 00:15:11.996494   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.496005   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.996429   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.496310   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.996525   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.495995   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.996172   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.495809   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.996016   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.496210   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
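
Meanwhile the run logging with pid 64758 is waiting for the apiserver process itself, re-running `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms. A rough sketch of that poll, assuming a generic runSSH helper rather than minikube's actual ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH is a placeholder for executing a command on the guest over SSH;
// here it shells out locally so the sketch stays self-contained.
func runSSH(cmd string) error {
	return exec.Command("/bin/sh", "-c", cmd).Run()
}

// waitForAPIServerProcess re-runs pgrep every 500ms until a kube-apiserver
// process exists or the timeout elapses; pgrep exits 0 only on a match.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
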
	I0804 00:15:14.840723   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.777281435s)
	I0804 00:15:14.840759   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0804 00:15:14.840758   65087 ssh_runner.go:235] Completed: which crictl: (3.777229082s)
	I0804 00:15:14.840769   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:14.894482   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0804 00:15:14.894607   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:16.729218   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (1.888374505s)
	I0804 00:15:16.729270   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0804 00:15:16.729277   65087 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.834630766s)
	I0804 00:15:16.729304   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:16.729312   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0804 00:15:16.729368   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:13.284961   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:13.285332   66290 retry.go:31] will retry after 2.307816709s: waiting for machine to come up
	I0804 00:15:15.594435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594855   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594885   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:15.594804   66290 retry.go:31] will retry after 2.83542957s: waiting for machine to come up
	I0804 00:15:16.996765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.496069   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.995828   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.495847   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.996276   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.496155   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.996708   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.996145   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.496193   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.031187   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.301792704s)
	I0804 00:15:19.031309   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0804 00:15:19.031343   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:19.031389   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:20.493093   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.461677557s)
	I0804 00:15:20.493134   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0804 00:15:20.493152   65087 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:20.493202   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:18.433690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434156   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434188   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:18.434105   66290 retry.go:31] will retry after 2.563856777s: waiting for machine to come up
	I0804 00:15:20.999804   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000307   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:21.000236   66290 retry.go:31] will retry after 3.783170851s: waiting for machine to come up
	I0804 00:15:26.095635   64502 start.go:364] duration metric: took 52.776761645s to acquireMachinesLock for "embed-certs-877598"
	I0804 00:15:26.095695   64502 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:26.095703   64502 fix.go:54] fixHost starting: 
	I0804 00:15:26.096104   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:26.096143   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:26.113770   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0804 00:15:26.114303   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:26.114742   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:15:26.114768   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:26.115137   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:26.115330   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:26.115508   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:15:26.117156   64502 fix.go:112] recreateIfNeeded on embed-certs-877598: state=Stopped err=<nil>
	I0804 00:15:26.117179   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	W0804 00:15:26.117343   64502 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:26.119743   64502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-877598" ...
	I0804 00:15:21.996520   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.495922   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.995766   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.495923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.995770   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.496788   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.996759   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.996017   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.496445   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.363529   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870304087s)
	I0804 00:15:22.363559   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0804 00:15:22.363573   65087 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:22.363618   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:23.009879   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0804 00:15:23.009924   65087 cache_images.go:123] Successfully loaded all cached images
	I0804 00:15:23.009932   65087 cache_images.go:92] duration metric: took 15.095790334s to LoadCachedImages
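
The cache_images lines above show the no-preload profile copying each image tarball to /var/lib/minikube/images (skipping any that already exist) and loading them one at a time with `sudo podman load -i ...` until every cached image is on the node. A simplified sketch of that sequence; runSSH and copyIfMissing are hypothetical stand-ins for minikube's ssh_runner:

package main

import (
	"fmt"
	"path"
	"time"
)

// runSSH and copyIfMissing are hypothetical stand-ins for minikube's
// ssh_runner: the first runs a command on the node, the second scps a local
// tarball only when the remote copy is absent ("copy: skipping ... (exists)").
func runSSH(cmd string) error                  { fmt.Println("ssh:", cmd); return nil }
func copyIfMissing(local, remote string) error { fmt.Println("scp:", local, "->", remote); return nil }

// loadCachedImages transfers and loads each cached image tarball in turn and
// reports the total time, like the "duration metric ... LoadCachedImages" line.
func loadCachedImages(cacheDir string, tarballs []string) error {
	start := time.Now()
	for _, tb := range tarballs {
		remote := path.Join("/var/lib/minikube/images", tb)
		if err := copyIfMissing(path.Join(cacheDir, tb), remote); err != nil {
			return err
		}
		if err := runSSH("sudo podman load -i " + remote); err != nil {
			return fmt.Errorf("loading %s: %w", tb, err)
		}
	}
	fmt.Printf("loaded %d cached images in %v\n", len(tarballs), time.Since(start))
	return nil
}

func main() {
	imgs := []string{"etcd_3.5.15-0", "kube-apiserver_v1.31.0-rc.0", "coredns_v1.11.1", "storage-provisioner_v5"}
	_ = loadCachedImages("/home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64", imgs)
}
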
	I0804 00:15:23.009946   65087 kubeadm.go:934] updating node { 192.168.61.137 8443 v1.31.0-rc.0 crio true true} ...
	I0804 00:15:23.010145   65087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:23.010230   65087 ssh_runner.go:195] Run: crio config
	I0804 00:15:23.057968   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:23.057991   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:23.058002   65087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:23.058022   65087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118016 NodeName:no-preload-118016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:23.058149   65087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-118016"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
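
The kubeadm.go:187 dump above is the fully rendered kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. A minimal sketch of rendering such a config with text/template, substituting only a handful of the values shown (this is not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// cfg carries just the handful of values this sketch substitutes; the real
// kubeadm options struct logged above has many more fields.
type cfg struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log lines above.
	_ = t.Execute(os.Stdout, cfg{
		AdvertiseAddress:  "192.168.61.137",
		BindPort:          8443,
		NodeName:          "no-preload-118016",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.31.0-rc.0",
	})
}
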
	I0804 00:15:23.058210   65087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0804 00:15:23.068635   65087 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:23.068713   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:23.077867   65087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0804 00:15:23.094220   65087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0804 00:15:23.110798   65087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0804 00:15:23.132230   65087 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:23.136622   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:23.149229   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:23.284623   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:23.309115   65087 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016 for IP: 192.168.61.137
	I0804 00:15:23.309212   65087 certs.go:194] generating shared ca certs ...
	I0804 00:15:23.309242   65087 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:23.309451   65087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:23.309509   65087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:23.309525   65087 certs.go:256] generating profile certs ...
	I0804 00:15:23.309633   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.key
	I0804 00:15:23.309718   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key.794a08a1
	I0804 00:15:23.309775   65087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key
	I0804 00:15:23.309951   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:23.309992   65087 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:23.310006   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:23.310050   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:23.310084   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:23.310125   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:23.310186   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:23.310811   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:23.346479   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:23.390508   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:23.419626   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:23.453891   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:15:23.481597   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:23.507749   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:23.537567   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:23.565469   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:23.590844   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:23.618748   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:23.645921   65087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:23.664034   65087 ssh_runner.go:195] Run: openssl version
	I0804 00:15:23.670083   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:23.681080   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685717   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685777   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.691573   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:23.702260   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:23.713185   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717747   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717803   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.723598   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:23.734445   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:23.745394   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750239   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750312   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.756471   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
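
The repeated test/ln/openssl pattern above installs each CA certificate and then creates the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL uses to find it; the hash (for example b5213941 for minikubeCA.pem) comes from `openssl x509 -hash -noout`. A short sketch of that sequence, again with a placeholder runSSH:

package main

import (
	"fmt"
	"strings"
)

// runSSH stands in for ssh_runner: it would run the command on the node and
// return its stdout; the canned value mimics "openssl x509 -hash" output.
func runSSH(cmd string) (string, error) { fmt.Println("ssh:", cmd); return "b5213941", nil }

// installCACert mirrors the log's sequence: link the PEM from
// /usr/share/ca-certificates into /etc/ssl/certs, compute its subject hash,
// and add the /etc/ssl/certs/<hash>.0 symlink OpenSSL uses for lookups.
func installCACert(name string) error {
	src := "/usr/share/ca-certificates/" + name
	dst := "/etc/ssl/certs/" + name
	if _, err := runSSH(fmt.Sprintf(`sudo /bin/bash -c "test -s %s && ln -fs %s %s"`, src, src, dst)); err != nil {
		return err
	}
	out, err := runSSH("openssl x509 -hash -noout -in " + src)
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(out) + ".0"
	_, err = runSSH(fmt.Sprintf(`sudo /bin/bash -c "test -L %s || ln -fs %s %s"`, link, dst, link))
	return err
}

func main() {
	_ = installCACert("minikubeCA.pem")
}
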
	I0804 00:15:23.767795   65087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:23.772483   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:23.778613   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:23.784560   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:23.790455   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:23.796260   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:23.802405   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:15:23.808623   65087 kubeadm.go:392] StartCluster: {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:23.808710   65087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:23.808753   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.857908   65087 cri.go:89] found id: ""
	I0804 00:15:23.857983   65087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:23.868694   65087 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:23.868717   65087 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:23.868789   65087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:23.878826   65087 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:23.879879   65087 kubeconfig.go:125] found "no-preload-118016" server: "https://192.168.61.137:8443"
	I0804 00:15:23.882653   65087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:23.893441   65087 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.137
	I0804 00:15:23.893475   65087 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:23.893489   65087 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:23.893533   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.933954   65087 cri.go:89] found id: ""
	I0804 00:15:23.934026   65087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:23.951080   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:23.962250   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:23.962274   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:23.962327   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:23.971760   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:23.971817   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:23.981767   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:23.991443   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:23.991494   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:24.001911   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.011927   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:24.011988   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.022349   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:24.032305   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:24.032371   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
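
Because the kubeconfigs are gone after the stop, the grep/rm pairs above walk admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, deleting any file that no longer references control-plane.minikube.internal:8443 so kubeadm can regenerate it. A compact sketch of that cleanup loop (runSSH is a placeholder, not minikube's ssh_runner):

package main

import "fmt"

// runSSH is a placeholder for running a command on the node over SSH; a
// non-nil error corresponds to the "Process exited with status 2" lines above.
func runSSH(cmd string) error { fmt.Println("ssh:", cmd); return fmt.Errorf("exit status 2") }

// cleanStaleKubeconfigs removes any kubeconfig that no longer references the
// expected control-plane endpoint, matching the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = runSSH("sudo rm -f " + path)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
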
	I0804 00:15:24.042416   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:24.052403   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:24.163413   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.106900   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.323496   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.410928   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.569137   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:25.569221   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.069288   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.570343   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.615965   65087 api_server.go:72] duration metric: took 1.046825245s to wait for apiserver process to appear ...
	I0804 00:15:26.615997   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:26.616022   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:26.616618   65087 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
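
Once the process check succeeds, api_server.go switches to polling the /healthz endpoint over HTTPS; the first attempt above fails with connection refused and the wait simply continues. A minimal sketch of that health probe, using a client that skips certificate verification to stay self-contained (the real code trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls https://<host>:8443/healthz until it returns 200 OK or
// the timeout expires; connection-refused errors just mean the apiserver is
// not listening yet, as in the "stopped: ... connection refused" line above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// NOTE: skipping verification keeps the sketch self-contained;
		// minikube verifies against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.137:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
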
	I0804 00:15:24.788329   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788775   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Found IP for machine: 192.168.39.132
	I0804 00:15:24.788799   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has current primary IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788811   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserving static IP address...
	I0804 00:15:24.789238   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.789266   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | skip adding static IP to network mk-default-k8s-diff-port-969068 - found existing host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"}
	I0804 00:15:24.789287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserved static IP address: 192.168.39.132
	I0804 00:15:24.789303   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for SSH to be available...
	I0804 00:15:24.789333   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Getting to WaitForSSH function...
	I0804 00:15:24.791371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791734   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.791762   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH client type: external
	I0804 00:15:24.791934   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa (-rw-------)
	I0804 00:15:24.791975   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:24.791994   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | About to run SSH command:
	I0804 00:15:24.792010   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | exit 0
	I0804 00:15:24.921420   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:24.921795   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetConfigRaw
	I0804 00:15:24.922375   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:24.925074   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.925431   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925680   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:15:24.925904   65441 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:24.925924   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:24.926120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:24.928597   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929006   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.929045   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929171   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:24.929334   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929498   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:24.929814   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:24.930001   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:24.930012   65441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:25.046325   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:25.046355   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046703   65441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-969068"
	I0804 00:15:25.046733   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046940   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.049807   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050383   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.050427   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050547   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.050739   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.050937   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.051131   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.051296   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.051504   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.051525   65441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-969068 && echo "default-k8s-diff-port-969068" | sudo tee /etc/hostname
	I0804 00:15:25.182512   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-969068
	
	I0804 00:15:25.182552   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.185673   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186019   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.186051   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186241   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.186425   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186551   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186660   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.186853   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.187034   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.187051   65441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-969068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-969068/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-969068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:25.313435   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
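
The two SSH commands above set the guest's hostname and then make sure /etc/hosts carries a matching 127.0.1.1 entry, rewriting an existing one or appending a new line. A sketch of how such command strings might be assembled before being sent over SSH (not the provisioner's exact code):

package main

import "fmt"

// hostnameCommands returns the two shell snippets run over SSH above: set the
// hostname, then make sure /etc/hosts has a 127.0.1.1 entry for it.
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
	fixHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return setHostname, fixHosts
}

func main() {
	a, b := hostnameCommands("default-k8s-diff-port-969068")
	fmt.Println(a)
	fmt.Println(b)
}
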
	I0804 00:15:25.313470   65441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:25.313518   65441 buildroot.go:174] setting up certificates
	I0804 00:15:25.313531   65441 provision.go:84] configureAuth start
	I0804 00:15:25.313544   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.313856   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:25.316883   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317233   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.317287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317475   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.319773   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320180   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.320214   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320404   65441 provision.go:143] copyHostCerts
	I0804 00:15:25.320459   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:25.320467   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:25.320531   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:25.320666   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:25.320675   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:25.320702   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:25.320769   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:25.320777   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:25.320804   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:25.320871   65441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-969068 san=[127.0.0.1 192.168.39.132 default-k8s-diff-port-969068 localhost minikube]
	I0804 00:15:25.374535   65441 provision.go:177] copyRemoteCerts
	I0804 00:15:25.374590   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:25.374613   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.377629   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378047   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.378073   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.378478   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.378672   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.378897   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.469632   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:25.495826   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0804 00:15:25.527006   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:25.557603   65441 provision.go:87] duration metric: took 244.055462ms to configureAuth
	I0804 00:15:25.557637   65441 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:25.557873   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:25.557982   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.560974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561339   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.561389   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.561740   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.561881   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.562043   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.562248   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.562456   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.562471   65441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:25.835452   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:25.835480   65441 machine.go:97] duration metric: took 909.563441ms to provisionDockerMachine
	I0804 00:15:25.835496   65441 start.go:293] postStartSetup for "default-k8s-diff-port-969068" (driver="kvm2")
	I0804 00:15:25.835512   65441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:25.835541   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:25.835846   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:25.835873   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.838713   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839124   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.839151   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.839465   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.839634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.839779   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.928376   65441 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:25.932472   65441 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:25.932498   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:25.932608   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:25.932775   65441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:25.932951   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:25.943100   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:25.969517   65441 start.go:296] duration metric: took 134.003956ms for postStartSetup
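The postStartSetup step above scans the local .minikube/addons and .minikube/files trees and maps each asset onto a guest path (here files/etc/ssl/certs/167952.pem lands in /etc/ssl/certs). As a rough sketch of that mapping only, not minikube's filesync.go, with a hypothetical scanAssets helper and an illustrative root path:

    // localassets.go - minimal sketch (not minikube's filesync implementation):
    // every file under <root> is mapped to the same relative path under "/" on
    // the guest, e.g. <root>/etc/ssl/certs/167952.pem -> /etc/ssl/certs/167952.pem.
    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    // scanAssets returns local file -> guest destination path.
    func scanAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, relErr := filepath.Rel(root, p)
            if relErr != nil {
                return relErr
            }
            assets[p] = "/" + filepath.ToSlash(rel)
            return nil
        })
        return assets, err
    }

    func main() {
        m, err := scanAssets("/home/jenkins/.minikube/files") // hypothetical path
        if err != nil {
            fmt.Println("scan failed:", err)
            return
        }
        for src, dst := range m {
            fmt.Printf("%s -> %s\n", src, dst)
        }
    }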
	I0804 00:15:25.969567   65441 fix.go:56] duration metric: took 20.799045329s for fixHost
	I0804 00:15:25.969591   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.972743   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973172   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.973204   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973342   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.973596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973768   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973944   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.974158   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.974330   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.974343   65441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:15:26.095438   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730526.053053982
	
	I0804 00:15:26.095462   65441 fix.go:216] guest clock: 1722730526.053053982
	I0804 00:15:26.095472   65441 fix.go:229] Guest: 2024-08-04 00:15:26.053053982 +0000 UTC Remote: 2024-08-04 00:15:25.969572309 +0000 UTC m=+213.641216658 (delta=83.481673ms)
	I0804 00:15:26.095524   65441 fix.go:200] guest clock delta is within tolerance: 83.481673ms
	I0804 00:15:26.095534   65441 start.go:83] releasing machines lock for "default-k8s-diff-port-969068", held for 20.925048627s
	I0804 00:15:26.095570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.095862   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:26.098718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099112   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.099145   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.099929   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100182   65441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:26.100222   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.100347   65441 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:26.100388   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.103393   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103720   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103942   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.103963   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104142   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104159   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.104243   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104347   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104384   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104499   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104545   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104728   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.104881   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.214704   65441 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:26.221287   65441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:26.378021   65441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:26.385673   65441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:26.385751   65441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:26.403073   65441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
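The two Run lines above neutralize conflicting CNI configs by renaming every *bridge* and *podman* file under /etc/cni/net.d with a .mk_disabled suffix. A minimal illustration of the same rename-to-disable idea (disableBridgeCNI is a hypothetical helper, not minikube's cni.go):

    // disablecni.go - illustrative only: rename bridge/podman CNI configs so the
    // runtime ignores them, mirroring the "*.mk_disabled" pattern in the log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func disableBridgeCNI(dir string) ([]string, error) {
        var disabled []string
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err != nil {
                return disabled, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        got, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println("disabled:", got, "err:", err)
    }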
	I0804 00:15:26.403104   65441 start.go:495] detecting cgroup driver to use...
	I0804 00:15:26.403193   65441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:26.421108   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:26.435556   65441 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:26.435627   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:26.455219   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:26.477841   65441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:26.626980   65441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:26.806808   65441 docker.go:233] disabling docker service ...
	I0804 00:15:26.806887   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:26.824079   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:26.839225   65441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:26.967375   65441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:27.136156   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:27.151822   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:27.173326   65441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:27.173404   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.184431   65441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:27.184509   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.194890   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.208349   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.222326   65441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:27.237212   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.249571   65441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.274913   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.288929   65441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:27.305789   65441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:27.305863   65441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:27.321708   65441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:27.332129   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:27.482279   65441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:27.638388   65441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:27.638465   65441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:27.644607   65441 start.go:563] Will wait 60s for crictl version
	I0804 00:15:27.644665   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:15:27.648663   65441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:27.691731   65441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:27.691824   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.731365   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.767416   65441 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
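After crio is restarted, the runner waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to answer before declaring the runtime ready. A small sketch of that wait-for-socket step, standard library only (waitForSocket is a made-up helper, not the code behind start.go:542):

    // waitsock.go - sketch of "Will wait 60s for socket path": poll until the
    // CRI socket exists or the deadline passes. Not the actual minikube code.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is there
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second)
        fmt.Println("wait result:", err)
    }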
	I0804 00:15:26.121074   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Start
	I0804 00:15:26.121263   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring networks are active...
	I0804 00:15:26.122075   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network default is active
	I0804 00:15:26.122471   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network mk-embed-certs-877598 is active
	I0804 00:15:26.122884   64502 main.go:141] libmachine: (embed-certs-877598) Getting domain xml...
	I0804 00:15:26.123684   64502 main.go:141] libmachine: (embed-certs-877598) Creating domain...
	I0804 00:15:27.536026   64502 main.go:141] libmachine: (embed-certs-877598) Waiting to get IP...
	I0804 00:15:27.537165   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.537650   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.537734   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.537654   66522 retry.go:31] will retry after 277.473157ms: waiting for machine to come up
	I0804 00:15:27.817330   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.817824   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.817858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.817788   66522 retry.go:31] will retry after 322.160841ms: waiting for machine to come up
	I0804 00:15:28.141287   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.141818   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.141855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.141775   66522 retry.go:31] will retry after 325.833359ms: waiting for machine to come up
	I0804 00:15:28.469440   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.469976   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.470015   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.469933   66522 retry.go:31] will retry after 372.304971ms: waiting for machine to come up
	I0804 00:15:28.843604   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.844376   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.844400   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.844297   66522 retry.go:31] will retry after 607.361674ms: waiting for machine to come up
	I0804 00:15:29.453082   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:29.453557   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:29.453586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:29.453527   66522 retry.go:31] will retry after 615.002468ms: waiting for machine to come up
	I0804 00:15:30.070598   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.071112   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.071134   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.071079   66522 retry.go:31] will retry after 834.292107ms: waiting for machine to come up
	I0804 00:15:27.116719   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.030589   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.030625   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.030641   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.091459   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.091494   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.116633   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.149335   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.149394   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:30.617009   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.622086   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.622117   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.116320   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.125065   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:31.125143   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.617091   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.627142   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:15:31.636371   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:15:31.636405   65087 api_server.go:131] duration metric: took 5.020400356s to wait for apiserver health ...
	I0804 00:15:31.636414   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:31.636420   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:31.638145   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
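The healthz sequence above is the usual apiserver start-up progression: 403 while RBAC is not yet bootstrapped, 500 while individual post-start hooks ([-]etcd, [-]poststarthook/bootstrap-controller, ...) are still failing, then 200 once every check passes. A compact sketch of such a poll loop, using an unauthenticated probe with TLS verification disabled to mirror the anonymous requests in the log (a real client would present the cluster CA and client certs):

    // healthz.go - sketch of polling an apiserver /healthz endpoint until it
    // reports 200 OK, tolerating the 403/500 responses seen above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz answered "ok"
                }
                snippet := string(body)
                if len(snippet) > 60 {
                    snippet = snippet[:60]
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, snippet)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.61.137:8443/healthz", 4*time.Minute))
    }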
	I0804 00:15:26.996399   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.496810   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.995825   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.496395   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.996561   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.496735   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.996542   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.496406   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.996259   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.496307   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.639553   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:31.658269   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:31.685188   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:31.703581   65087 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:31.703627   65087 system_pods.go:61] "coredns-6f6b679f8f-9vdxc" [fd645695-cc1d-4394-96b0-832f48e9cf26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:31.703638   65087 system_pods.go:61] "etcd-no-preload-118016" [a329ecd7-7574-48f4-a776-7b7c05465f8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:31.703649   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [43d313aa-1844-488d-8925-b744f504323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:31.703661   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [d56a5461-29d3-47f7-95df-a7fc6b52ca2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:31.703669   65087 system_pods.go:61] "kube-proxy-8bcg7" [c2b43118-5216-41bf-9f16-00f11ca1eab5] Running
	I0804 00:15:31.703678   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [53dc528c-2f00-4ca6-86c6-d02f4533229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:31.703687   65087 system_pods.go:61] "metrics-server-6867b74b74-5xfgz" [c558b60d-3816-406a-addb-96cd42266bd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:31.703698   65087 system_pods.go:61] "storage-provisioner" [1edb442e-272f-4ef7-b3fb-7c46b915c61a] Running
	I0804 00:15:31.703707   65087 system_pods.go:74] duration metric: took 18.49198ms to wait for pod list to return data ...
	I0804 00:15:31.703721   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:31.712702   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:31.712735   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:31.712748   65087 node_conditions.go:105] duration metric: took 9.019815ms to run NodePressure ...
	I0804 00:15:31.712773   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:27.768972   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:27.772437   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.772860   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:27.772903   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.773135   65441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:27.777834   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:27.792279   65441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:27.792437   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:27.792493   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:27.833330   65441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:27.833453   65441 ssh_runner.go:195] Run: which lz4
	I0804 00:15:27.837836   65441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:15:27.842093   65441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:27.842128   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:29.410529   65441 crio.go:462] duration metric: took 1.572735301s to copy over tarball
	I0804 00:15:29.410610   65441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:32.062492   65441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.651848511s)
	I0804 00:15:32.062533   65441 crio.go:469] duration metric: took 2.651972207s to extract the tarball
	I0804 00:15:32.062545   65441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:32.100003   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:32.144166   65441 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:32.144192   65441 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:32.144201   65441 kubeadm.go:934] updating node { 192.168.39.132 8444 v1.30.3 crio true true} ...
	I0804 00:15:32.144327   65441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-969068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:32.144434   65441 ssh_runner.go:195] Run: crio config
	I0804 00:15:32.197593   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:32.197618   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:32.197630   65441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:32.197658   65441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-969068 NodeName:default-k8s-diff-port-969068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:32.197862   65441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-969068"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:32.197937   65441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:32.208469   65441 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:32.208551   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:32.218194   65441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0804 00:15:32.237731   65441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:32.259599   65441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0804 00:15:32.281113   65441 ssh_runner.go:195] Run: grep 192.168.39.132	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:32.285559   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:32.298722   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
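The /etc/hosts updates above use a filter-append-replace pattern: strip any existing line for the host name, append the fresh IP mapping, and copy the temp file back into place. A Go rendition of the same idempotent update (upsertHostsEntry is hypothetical; point it at a scratch file rather than the real /etc/hosts):

    // hostsentry.go - sketch of the idempotent hosts-file update seen in the log:
    // drop any previous line for the host name, append the fresh mapping, then
    // atomically replace the file.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+host) {
                continue // drop blanks and the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        err := upsertHostsEntry("hosts.test", "192.168.39.132", "control-plane.minikube.internal")
        fmt.Println("update:", err)
    }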
	I0804 00:15:30.906612   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.907056   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.907086   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.907012   66522 retry.go:31] will retry after 1.489076061s: waiting for machine to come up
	I0804 00:15:32.397239   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:32.397614   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:32.397642   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:32.397568   66522 retry.go:31] will retry after 1.737097329s: waiting for machine to come up
	I0804 00:15:34.135859   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:34.136363   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:34.136393   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:34.136321   66522 retry.go:31] will retry after 2.154712298s: waiting for machine to come up
	I0804 00:15:31.996780   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.496164   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.996444   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.496838   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.996533   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.496300   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.996772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.495937   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.996834   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.496277   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.982926   65087 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989888   65087 kubeadm.go:739] kubelet initialised
	I0804 00:15:31.989926   65087 kubeadm.go:740] duration metric: took 6.968445ms waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989938   65087 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:31.997210   65087 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:34.748142   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:32.432400   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:32.450525   65441 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068 for IP: 192.168.39.132
	I0804 00:15:32.450548   65441 certs.go:194] generating shared ca certs ...
	I0804 00:15:32.450571   65441 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:32.450738   65441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:32.450801   65441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:32.450815   65441 certs.go:256] generating profile certs ...
	I0804 00:15:32.450922   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.key
	I0804 00:15:32.451000   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key.a17bd5dd
	I0804 00:15:32.451053   65441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key
	I0804 00:15:32.451199   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:32.451242   65441 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:32.451255   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:32.451279   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:32.451303   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:32.451326   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:32.451365   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:32.451910   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:32.505178   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:32.557546   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:32.596512   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:32.635476   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 00:15:32.687156   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:32.716537   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:32.746312   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:15:32.777788   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:32.806730   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:32.835822   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:32.864241   65441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:32.886754   65441 ssh_runner.go:195] Run: openssl version
	I0804 00:15:32.893177   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:32.904847   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909871   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909937   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.916357   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:32.927322   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:32.939447   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944221   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944275   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.950218   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:32.966506   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:32.981288   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986761   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986831   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.993077   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
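	The `openssl x509 -hash -noout -in <cert>` runs above compute the OpenSSL subject-name hash used to name the trust-store symlinks in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem). Below is a minimal Go sketch of the same idea; it is illustrative only (not minikube's certs.go), and the certificate path is simply taken from the log above.

```go
// Illustrative sketch only (not minikube's certs.go): compute the OpenSSL
// subject hash for a CA certificate and install the <hash>.0 symlink that
// the system trust store expects, mirroring the shell commands in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

	// Same command the log runs: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	// Equivalent of: test -L /etc/ssl/certs/<hash>.0 || ln -fs <cert> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err != nil {
		if err := os.Symlink(certPath, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trust-store symlink:", link)
}
```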
	I0804 00:15:33.007428   65441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:33.013290   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:33.019997   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:33.026423   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:33.033004   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:33.039205   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:33.045367   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
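	Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a failing check would trigger regeneration. A minimal Go equivalent using crypto/x509 is sketched below; it is illustrative only, and the file path is an assumption drawn from the log.

```go
// Illustrative sketch only: report whether a PEM certificate expires within
// the next 86400 seconds, roughly what `openssl x509 -checkend 86400` checks.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // assumed path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24 hours")
}
```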
	I0804 00:15:33.051462   65441 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:33.051546   65441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:33.051605   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.094354   65441 cri.go:89] found id: ""
	I0804 00:15:33.094433   65441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:33.105416   65441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:33.105439   65441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:33.105480   65441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:33.115838   65441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:33.117466   65441 kubeconfig.go:125] found "default-k8s-diff-port-969068" server: "https://192.168.39.132:8444"
	I0804 00:15:33.120806   65441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:33.130533   65441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.132
	I0804 00:15:33.130567   65441 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:33.130579   65441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:33.130628   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.178718   65441 cri.go:89] found id: ""
	I0804 00:15:33.178813   65441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:33.199000   65441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:33.212169   65441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:33.212188   65441 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:33.212255   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0804 00:15:33.225192   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:33.225254   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:33.239194   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0804 00:15:33.252402   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:33.252470   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:33.265198   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.276564   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:33.276636   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.288785   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0804 00:15:33.299848   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:33.299904   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:33.311115   65441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:33.322121   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:33.442578   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.526815   65441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084197731s)
	I0804 00:15:34.526857   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.803105   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.893681   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
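	Because existing configuration files were found (kubeadm.go:408), the restart path re-runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml instead of a full `kubeadm init`. A hedged Go sketch of that sequence, shelling out the same way, is below; it is not minikube's kubeadm.go and assumes kubeadm is on PATH and the process runs with sufficient privileges.

```go
// Illustrative sketch only (not minikube's kubeadm.go): run the same
// `kubeadm init phase` subcommands the log shows, in order, against the
// generated kubeadm.yaml.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v\n%s\n", phase, out)
		if err != nil {
			panic(err)
		}
	}
}
```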
	I0804 00:15:34.978573   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:34.978668   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.479179   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.979520   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.063056   65441 api_server.go:72] duration metric: took 1.084463955s to wait for apiserver process to appear ...
	I0804 00:15:36.063161   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:36.063203   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.063755   65441 api_server.go:269] stopped: https://192.168.39.132:8444/healthz: Get "https://192.168.39.132:8444/healthz": dial tcp 192.168.39.132:8444: connect: connection refused
	I0804 00:15:36.563501   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.293051   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:36.293675   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:36.293710   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:36.293604   66522 retry.go:31] will retry after 2.826050203s: waiting for machine to come up
	I0804 00:15:39.120961   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:39.121602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:39.121628   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:39.121554   66522 retry.go:31] will retry after 2.710829438s: waiting for machine to come up
	I0804 00:15:36.996761   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.495885   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.995785   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.496550   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.996645   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.995851   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.496685   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.995896   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.495864   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.005216   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.505397   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.405829   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.405895   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.405913   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.433026   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.433063   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.563242   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.568554   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:39.568591   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.064078   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.085940   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.085978   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.564041   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.569785   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.569812   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.063334   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.068113   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.068135   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.563691   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.569214   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.569248   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.063737   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.068227   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.068260   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.563309   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.567740   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.567775   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:43.063306   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:43.067611   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:15:43.073842   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:15:43.073868   65441 api_server.go:131] duration metric: took 7.010684682s to wait for apiserver health ...
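	The retry loop above polls https://192.168.39.132:8444/healthz until it returns 200: the early 403 responses come from the anonymous user being rejected before the RBAC bootstrap roles exist, and the 500 responses list the post-start hooks that have not finished yet. A minimal Go poller in the same spirit is sketched below (illustrative only, not minikube's api_server.go; it skips TLS verification because it does not load the minikube CA).

```go
// Illustrative sketch only (not minikube's api_server.go): poll the
// apiserver /healthz endpoint until it returns 200 OK or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.132:8444/healthz" // endpoint from the log above
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is signed by minikubeCA, which this sketch does
		// not load, so server certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
```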
	I0804 00:15:43.073879   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:43.073887   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:43.075779   65441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:43.077123   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:43.088611   65441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:43.109845   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:43.119204   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:43.119235   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:43.119246   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:43.119259   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:43.119269   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:43.119275   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:15:43.119282   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:43.119300   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:43.119309   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:15:43.119317   65441 system_pods.go:74] duration metric: took 9.453775ms to wait for pod list to return data ...
	I0804 00:15:43.119328   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:43.122493   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:43.122516   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:43.122528   65441 node_conditions.go:105] duration metric: took 3.191087ms to run NodePressure ...
	I0804 00:15:43.122547   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:43.391258   65441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395252   65441 kubeadm.go:739] kubelet initialised
	I0804 00:15:43.395274   65441 kubeadm.go:740] duration metric: took 3.992079ms waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395282   65441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:43.400173   65441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.404618   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404645   65441 pod_ready.go:81] duration metric: took 4.449232ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.404665   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404675   65441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.409134   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409165   65441 pod_ready.go:81] duration metric: took 4.471898ms for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.409178   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409190   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.414342   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414362   65441 pod_ready.go:81] duration metric: took 5.160435ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.414374   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414383   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.513956   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.513987   65441 pod_ready.go:81] duration metric: took 99.59507ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.514003   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.514033   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.913592   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913619   65441 pod_ready.go:81] duration metric: took 399.572927ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.913628   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913634   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.313833   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313864   65441 pod_ready.go:81] duration metric: took 400.220214ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.313878   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313886   65441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.713583   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713616   65441 pod_ready.go:81] duration metric: took 399.716432ms for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.713636   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713647   65441 pod_ready.go:38] duration metric: took 1.318356042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
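	The pod_ready.go wait above does not only check each pod's Ready condition; when the hosting node itself reports Ready=False it records the pod as "skipping" and moves on, which is why every system-critical pod is logged with a WaitExtra error here. A hedged client-go sketch of the basic per-pod check is below (illustrative only; the kubeconfig path and pod name are taken from this run and would differ elsewhere).

```go
// Illustrative sketch only (not minikube's pod_ready.go): report whether a
// kube-system pod has its Ready condition set to True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as written by the run above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-9607/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-zz7fr", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}
```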
	I0804 00:15:44.713666   65441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:15:44.725908   65441 ops.go:34] apiserver oom_adj: -16
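	The final sanity check reads /proc/$(pgrep kube-apiserver)/oom_adj and records -16, confirming the apiserver process carries the expected OOM-score adjustment before control-plane restart is declared complete. A tiny Go equivalent is sketched below (illustrative only).

```go
// Illustrative sketch only: find the newest kube-apiserver process and print
// its oom_adj value, mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", pid, data)
}
```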
	I0804 00:15:44.725935   65441 kubeadm.go:597] duration metric: took 11.620489409s to restartPrimaryControlPlane
	I0804 00:15:44.725947   65441 kubeadm.go:394] duration metric: took 11.674491721s to StartCluster
	I0804 00:15:44.725966   65441 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.726046   65441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:15:44.728392   65441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.728702   65441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:15:44.728805   65441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:15:44.728895   65441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728942   65441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.728954   65441 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:15:44.728958   65441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728990   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.728967   65441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.729027   65441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-969068"
	I0804 00:15:44.729039   65441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.729054   65441 addons.go:243] addon metrics-server should already be in state true
	I0804 00:15:44.729143   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.729436   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729470   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729515   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729564   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729598   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729642   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.728909   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:44.730486   65441 out.go:177] * Verifying Kubernetes components...
	I0804 00:15:44.731972   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:44.748737   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0804 00:15:44.749200   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0804 00:15:44.749311   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0804 00:15:44.749582   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749691   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749858   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.750128   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750144   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750153   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750171   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750326   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750347   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750609   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750617   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750810   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.751212   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.751249   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751286   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.751733   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751780   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.754574   65441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.754616   65441 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:15:44.754649   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.755038   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.755080   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.769763   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0804 00:15:44.770311   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.770828   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.770850   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.771209   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.771371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.771935   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0804 00:15:44.773284   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.773416   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0804 00:15:44.773646   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.773854   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.773866   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.773981   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.774227   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.774529   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.774551   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.774665   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.774711   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.774938   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.775078   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.776166   65441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:15:44.776690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.777692   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:15:44.777708   65441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:15:44.777724   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.778473   65441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:41.833728   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:41.834246   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:41.834270   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:41.834210   66522 retry.go:31] will retry after 2.891635961s: waiting for machine to come up
	I0804 00:15:44.727424   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727895   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has current primary IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727919   64502 main.go:141] libmachine: (embed-certs-877598) Found IP for machine: 192.168.50.140
	I0804 00:15:44.727943   64502 main.go:141] libmachine: (embed-certs-877598) Reserving static IP address...
	I0804 00:15:44.728570   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.728602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | skip adding static IP to network mk-embed-certs-877598 - found existing host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"}
	I0804 00:15:44.728617   64502 main.go:141] libmachine: (embed-certs-877598) Reserved static IP address: 192.168.50.140
	I0804 00:15:44.728634   64502 main.go:141] libmachine: (embed-certs-877598) Waiting for SSH to be available...
	I0804 00:15:44.728648   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Getting to WaitForSSH function...
	I0804 00:15:44.731684   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732102   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.732137   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732388   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH client type: external
	I0804 00:15:44.732408   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa (-rw-------)
	I0804 00:15:44.732438   64502 main.go:141] libmachine: (embed-certs-877598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:44.732448   64502 main.go:141] libmachine: (embed-certs-877598) DBG | About to run SSH command:
	I0804 00:15:44.732462   64502 main.go:141] libmachine: (embed-certs-877598) DBG | exit 0
	I0804 00:15:44.873689   64502 main.go:141] libmachine: (embed-certs-877598) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:44.874033   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetConfigRaw
	I0804 00:15:44.874716   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:44.877406   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.877823   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.877855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.878130   64502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/config.json ...
	I0804 00:15:44.878358   64502 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:44.878382   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:44.878563   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:44.880862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881215   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.881253   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881427   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:44.881597   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881785   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881958   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:44.882150   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:44.882381   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:44.882399   64502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:44.998143   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:44.998172   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998534   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:15:44.998564   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.001998   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002508   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.002545   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002691   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.002847   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003026   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003175   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.003388   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.003592   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.003606   64502 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-877598 && echo "embed-certs-877598" | sudo tee /etc/hostname
	I0804 00:15:45.142065   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-877598
	
	I0804 00:15:45.142123   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.145427   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.145858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.145912   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.146133   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.146279   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146422   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146595   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.146778   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.146991   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.147007   64502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-877598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-877598/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-877598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:45.275711   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:45.275748   64502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:45.275775   64502 buildroot.go:174] setting up certificates
	I0804 00:15:45.275790   64502 provision.go:84] configureAuth start
	I0804 00:15:45.275804   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:45.276145   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:45.279645   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280141   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.280166   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280298   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.283135   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283495   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.283521   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283693   64502 provision.go:143] copyHostCerts
	I0804 00:15:45.283754   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:45.283767   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:45.283837   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:45.283954   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:45.283975   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:45.284004   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:45.284168   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:45.284182   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:45.284214   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:45.284280   64502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.embed-certs-877598 san=[127.0.0.1 192.168.50.140 embed-certs-877598 localhost minikube]
	I0804 00:15:45.484805   64502 provision.go:177] copyRemoteCerts
	I0804 00:15:45.484861   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:45.484883   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.488177   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.488621   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488852   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.489032   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.489191   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.489340   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:45.580782   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:45.612118   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:45.638201   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:15:45.665741   64502 provision.go:87] duration metric: took 389.935703ms to configureAuth
	I0804 00:15:45.665778   64502 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:45.666008   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:45.666110   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.668942   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669312   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.669343   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.669812   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.669995   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.670158   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.670317   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.670501   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.670522   64502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:44.779708   65441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:44.779730   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:15:44.779747   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.780637   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781098   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.781120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.781424   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.781593   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.781753   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.783024   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783459   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.783479   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783895   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.784054   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.784219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.784343   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.793057   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0804 00:15:44.793581   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.794075   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.794094   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.794413   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.794586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.796274   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.796609   65441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:44.796623   65441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:15:44.796643   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.799445   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.799990   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.800254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.800698   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.800864   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.800974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.801305   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.962413   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:44.983596   65441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:45.057238   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:15:45.057261   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:15:45.082722   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:15:45.082745   65441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:15:45.088213   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:45.115230   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.115261   65441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:15:45.115325   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:45.164676   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.502008   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502040   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502381   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.502440   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502463   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.502476   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502484   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502701   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502718   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.510043   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.510065   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.510305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.510353   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.510364   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217233   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101870491s)
	I0804 00:15:46.217295   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217308   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.217585   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.217609   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217625   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217652   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.217719   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.218073   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.218091   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.218104   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.255756   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.091044347s)
	I0804 00:15:46.255802   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.255819   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256053   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256093   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256101   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256110   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.256117   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256412   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256446   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256454   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256465   65441 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-969068"
	I0804 00:15:46.258662   65441 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:15:41.995808   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.496612   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.996566   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.495812   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.996095   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.495902   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.996724   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.495854   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.996354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.496185   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.005235   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:44.003809   65087 pod_ready.go:92] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.003847   65087 pod_ready.go:81] duration metric: took 12.006609818s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.003861   65087 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009518   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.009541   65087 pod_ready.go:81] duration metric: took 5.671724ms for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009554   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014897   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.014923   65087 pod_ready.go:81] duration metric: took 5.360171ms for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014938   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521943   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.521968   65087 pod_ready.go:81] duration metric: took 1.507021563s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521983   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527550   65087 pod_ready.go:92] pod "kube-proxy-8bcg7" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.527575   65087 pod_ready.go:81] duration metric: took 5.585026ms for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527588   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604221   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.604245   65087 pod_ready.go:81] duration metric: took 76.648502ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604260   65087 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:46.260578   65441 addons.go:510] duration metric: took 1.531768603s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:15:46.988351   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:45.985471   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:45.985501   64502 machine.go:97] duration metric: took 1.107126695s to provisionDockerMachine
	I0804 00:15:45.985514   64502 start.go:293] postStartSetup for "embed-certs-877598" (driver="kvm2")
	I0804 00:15:45.985527   64502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:45.985554   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:45.985928   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:45.985962   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.989294   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989699   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.989731   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989875   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.990079   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.990230   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.990355   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.085684   64502 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:46.091660   64502 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:46.091690   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:46.091776   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:46.091873   64502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:46.092005   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:46.102373   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:46.129547   64502 start.go:296] duration metric: took 144.018823ms for postStartSetup
	I0804 00:15:46.129594   64502 fix.go:56] duration metric: took 20.033890858s for fixHost
	I0804 00:15:46.129619   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.132803   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133154   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.133190   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133347   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.133580   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.133766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.134016   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.134242   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:46.134454   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:46.134471   64502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:46.250499   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730546.219077490
	
	I0804 00:15:46.250528   64502 fix.go:216] guest clock: 1722730546.219077490
	I0804 00:15:46.250539   64502 fix.go:229] Guest: 2024-08-04 00:15:46.21907749 +0000 UTC Remote: 2024-08-04 00:15:46.129599456 +0000 UTC m=+355.401502879 (delta=89.478034ms)
	I0804 00:15:46.250567   64502 fix.go:200] guest clock delta is within tolerance: 89.478034ms
	I0804 00:15:46.250575   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 20.15490553s
	I0804 00:15:46.250609   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.250902   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:46.253782   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254164   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.254194   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254376   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.254967   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255169   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255247   64502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:46.255307   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.255376   64502 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:46.255399   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.260113   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260481   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.260511   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260529   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260702   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.260870   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.260995   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.261022   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.261045   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261182   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.261208   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.261305   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.261451   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261588   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.372061   64502 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:46.378356   64502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:46.527705   64502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:46.534567   64502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:46.534620   64502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:46.550801   64502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:46.550829   64502 start.go:495] detecting cgroup driver to use...
	I0804 00:15:46.550916   64502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:46.568369   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:46.583437   64502 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:46.583496   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:46.599267   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:46.614874   64502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:46.734467   64502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:46.900868   64502 docker.go:233] disabling docker service ...
	I0804 00:15:46.900941   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:46.915612   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:46.929948   64502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:47.056637   64502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:47.175277   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:47.190167   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:47.211062   64502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:47.211115   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.222459   64502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:47.222547   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.232964   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.243663   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.254387   64502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:47.266424   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.277323   64502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.296078   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.307058   64502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:47.317138   64502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:47.317223   64502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:47.332104   64502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:47.342965   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:47.464208   64502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:47.620127   64502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:47.620196   64502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:47.625103   64502 start.go:563] Will wait 60s for crictl version
	I0804 00:15:47.625165   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:15:47.628942   64502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:47.668593   64502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:47.668686   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.700313   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.737281   64502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:47.738730   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:47.741698   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742098   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:47.742144   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742310   64502 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:47.746883   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:47.760111   64502 kubeadm.go:883] updating cluster {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:47.760247   64502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:47.760305   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:47.801700   64502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:47.801766   64502 ssh_runner.go:195] Run: which lz4
	I0804 00:15:47.806337   64502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:15:47.811010   64502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:47.811050   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:49.359157   64502 crio.go:462] duration metric: took 1.552864688s to copy over tarball
	I0804 00:15:49.359245   64502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:46.996215   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.496634   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.996278   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.496184   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.996616   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.496240   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.996433   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.996600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.496459   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.611474   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:49.611879   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.616732   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:48.988818   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:49.988196   65441 node_ready.go:49] node "default-k8s-diff-port-969068" has status "Ready":"True"
	I0804 00:15:49.988220   65441 node_ready.go:38] duration metric: took 5.004585481s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:49.988229   65441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:49.994536   65441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001200   65441 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:50.001229   65441 pod_ready.go:81] duration metric: took 6.665744ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001243   65441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:52.009436   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.759772   64502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400487256s)
	I0804 00:15:51.759836   64502 crio.go:469] duration metric: took 2.40064418s to extract the tarball
	I0804 00:15:51.759848   64502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:51.800038   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:51.845098   64502 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:51.845124   64502 cache_images.go:84] Images are preloaded, skipping loading
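	(Taken together, the preload flow above is: ask crictl for the image list, and if the expected kube images are missing, copy the preloaded tarball onto the node and extract it under /var before re-checking. A compressed Go sketch of that decision follows; the structure and the string check are made up for illustration, only the commands themselves come from the log.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// 1. Does the runtime already have the preloaded images?
	out, _ := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if strings.Contains(string(out), "registry.k8s.io/kube-apiserver") {
		fmt.Println("images are preloaded, skipping loading")
		return
	}

	// 2. The real code scp's the ~400MB tarball from the host cache at this
	//    point; this sketch assumes it is already present on the node.
	if err := exec.Command("stat", tarball).Run(); err != nil {
		fmt.Println("preload tarball not on node, would copy it now:", err)
		return
	}

	// 3. Extract the image store into /var, preserving xattrs as the log shows.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}
```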
	I0804 00:15:51.845134   64502 kubeadm.go:934] updating node { 192.168.50.140 8443 v1.30.3 crio true true} ...
	I0804 00:15:51.845258   64502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-877598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:51.845339   64502 ssh_runner.go:195] Run: crio config
	I0804 00:15:51.895019   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:15:51.895039   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:51.895048   64502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:51.895067   64502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-877598 NodeName:embed-certs-877598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:51.895202   64502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-877598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:51.895272   64502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:51.906363   64502 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:51.906426   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:51.917727   64502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0804 00:15:51.936370   64502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:51.953894   64502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0804 00:15:51.972910   64502 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:51.977288   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:51.990992   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:52.115808   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:52.133326   64502 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598 for IP: 192.168.50.140
	I0804 00:15:52.133373   64502 certs.go:194] generating shared ca certs ...
	I0804 00:15:52.133396   64502 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:52.133564   64502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:52.133613   64502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:52.133628   64502 certs.go:256] generating profile certs ...
	I0804 00:15:52.133736   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/client.key
	I0804 00:15:52.133824   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key.5511d337
	I0804 00:15:52.133873   64502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key
	I0804 00:15:52.134013   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:52.134077   64502 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:52.134091   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:52.134130   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:52.134168   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:52.134200   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:52.134256   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:52.134880   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:52.175985   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:52.209458   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:52.239097   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:52.271037   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0804 00:15:52.317594   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:52.353485   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:52.382159   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:52.407478   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:52.433103   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:52.457233   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:52.481534   64502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:52.500482   64502 ssh_runner.go:195] Run: openssl version
	I0804 00:15:52.509021   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:52.522431   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527125   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527184   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.533310   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:52.546085   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:52.557781   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562516   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562587   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.568403   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:52.580431   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:52.592706   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597280   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597382   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.603284   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:52.616100   64502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:52.621422   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:52.631811   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:52.639130   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:52.646159   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:52.652721   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:52.659459   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
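	(The six openssl invocations above each ask whether a control-plane certificate expires within the next 86400 seconds. A small Go sketch of the same check using crypto/x509 follows; certExpiresWithin is a hypothetical helper, not part of minikube, and the paths are copied from the log.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within d, mirroring `openssl x509 -checkend`.
func certExpiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		expiring, err := certExpiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, expiring)
	}
}
```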
	I0804 00:15:52.665864   64502 kubeadm.go:392] StartCluster: {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:52.665991   64502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:52.666044   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.711272   64502 cri.go:89] found id: ""
	I0804 00:15:52.711346   64502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:52.722294   64502 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:52.722321   64502 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:52.722380   64502 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:52.733148   64502 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:52.734706   64502 kubeconfig.go:125] found "embed-certs-877598" server: "https://192.168.50.140:8443"
	I0804 00:15:52.737995   64502 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:52.749941   64502 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.140
	I0804 00:15:52.749986   64502 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:52.749998   64502 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:52.750043   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.793295   64502 cri.go:89] found id: ""
	I0804 00:15:52.793388   64502 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:52.811438   64502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:52.824055   64502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:52.824080   64502 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:52.824128   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:52.835393   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:52.835446   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:52.846732   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:52.856889   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:52.856942   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:52.869951   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.881836   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:52.881909   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.894121   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:52.905643   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:52.905711   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
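	(The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at control-plane.minikube.internal:8443 is removed so the `kubeadm init phase kubeconfig` step below regenerates it. A hedged Go sketch of the same loop; the structure is invented for illustration, the paths and endpoint come from the log.)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm recreates
			// it with the expected server address.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = os.Remove(path)
		}
	}
}
```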
	I0804 00:15:52.917063   64502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:52.929399   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:53.132145   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.096969   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.325640   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.385886   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.472926   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:54.473002   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.973406   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.473410   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.578082   64502 api_server.go:72] duration metric: took 1.105154357s to wait for apiserver process to appear ...
	I0804 00:15:55.578170   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:55.578207   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:55.578847   64502 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
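	(From here the restart logic polls the apiserver's /healthz endpoint roughly every 500ms: first the connection is refused, then the endpoint answers 403 for the anonymous user, then 500 while post-start hooks finish, and finally 200. A minimal self-contained poller in the same spirit is sketched below; the address comes from the log, the timings are approximations, and TLS verification is skipped purely for the sketch, whereas minikube trusts its own CA.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.50.140:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip certificate verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver container restarts.
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return
			}
			// The 403/500 bodies in the log show which post-start hooks are still pending.
			_ = body
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
```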
	I0804 00:15:51.996447   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.496265   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.996030   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.996313   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.495823   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.996360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.496652   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.996049   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:55.996141   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:56.045001   64758 cri.go:89] found id: ""
	I0804 00:15:56.045031   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.045042   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:56.045049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:56.045114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:56.086505   64758 cri.go:89] found id: ""
	I0804 00:15:56.086535   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.086547   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:56.086554   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:56.086618   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:56.125953   64758 cri.go:89] found id: ""
	I0804 00:15:56.125983   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.125994   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:56.126001   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:56.126060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:56.167313   64758 cri.go:89] found id: ""
	I0804 00:15:56.167343   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.167354   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:56.167361   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:56.167424   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:56.211102   64758 cri.go:89] found id: ""
	I0804 00:15:56.211132   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.211142   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:56.211149   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:56.211231   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:56.246894   64758 cri.go:89] found id: ""
	I0804 00:15:56.246926   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.246937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:56.246945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:56.247012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:56.281952   64758 cri.go:89] found id: ""
	I0804 00:15:56.281980   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.281991   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:56.281998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:56.282060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:56.317685   64758 cri.go:89] found id: ""
	I0804 00:15:56.317719   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.317733   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:56.317745   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:56.317762   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:56.335040   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:56.335069   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:56.475995   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:56.476017   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:56.476033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:56.567508   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:56.567551   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:56.618136   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:56.618166   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:54.112928   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.112987   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.179330   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.789712   65441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.789738   65441 pod_ready.go:81] duration metric: took 4.788487591s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.789749   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799762   65441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.799785   65441 pod_ready.go:81] duration metric: took 10.029856ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799795   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805685   65441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.805708   65441 pod_ready.go:81] duration metric: took 5.905108ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805718   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809797   65441 pod_ready.go:92] pod "kube-proxy-zz7fr" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.809818   65441 pod_ready.go:81] duration metric: took 4.094183ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809827   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820536   65441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.820557   65441 pod_ready.go:81] duration metric: took 10.722903ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820567   65441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:56.827543   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.078916   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.738609   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.738641   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:58.738657   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.772665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.772695   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:59.079121   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.083798   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.083829   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.579242   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.585343   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.585381   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.078877   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.099981   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.100022   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.578505   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.582665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.582692   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.172886   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:59.187045   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:59.187128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:59.225135   64758 cri.go:89] found id: ""
	I0804 00:15:59.225164   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.225173   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:59.225179   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:59.225255   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:59.262538   64758 cri.go:89] found id: ""
	I0804 00:15:59.262566   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.262573   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:59.262578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:59.262635   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:59.301665   64758 cri.go:89] found id: ""
	I0804 00:15:59.301697   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.301708   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:59.301715   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:59.301778   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:59.362742   64758 cri.go:89] found id: ""
	I0804 00:15:59.362766   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.362774   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:59.362779   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:59.362834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:59.404398   64758 cri.go:89] found id: ""
	I0804 00:15:59.404431   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.404509   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:59.404525   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:59.404594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:59.454257   64758 cri.go:89] found id: ""
	I0804 00:15:59.454285   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.454297   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:59.454305   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:59.454363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:59.496790   64758 cri.go:89] found id: ""
	I0804 00:15:59.496818   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.496829   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:59.496837   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:59.496896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:59.537395   64758 cri.go:89] found id: ""
	I0804 00:15:59.537424   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.537431   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:59.537439   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:59.537453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.600005   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:59.600042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:59.617304   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:59.617336   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:59.692828   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:59.692849   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:59.692864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:59.764000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:59.764038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
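Each "listing CRI containers" round above is an ssh'd crictl invocation that filters containers by name and returns only their IDs; because none are found, the run falls back to gathering kubelet, dmesg, and CRI-O logs. A rough local equivalent of the listing step, assuming crictl is on PATH and sudo is available (this is an illustrative sketch, not minikube's cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs shells out the same way the log shows
    // (crictl ps -a --quiet --name=<name>) and returns container IDs, one per line.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps failed: %w", err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainerIDs(name)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }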
	I0804 00:15:58.611600   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.110986   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.079326   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.083661   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.083689   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:01.578711   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.583011   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.583040   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:02.078606   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:02.083234   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:16:02.090079   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:16:02.090112   64502 api_server.go:131] duration metric: took 6.511921332s to wait for apiserver health ...
	I0804 00:16:02.090123   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:16:02.090132   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:16:02.092169   64502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:58.829268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.327623   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:02.093704   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:16:02.109001   64502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
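The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not print its contents. The sketch below shows only the general shape of a typical bridge + host-local conflist being written to that path; every field value in the JSON is an assumption for illustration, not the file minikube actually deployed.

    package main

    import (
    	"log"
    	"os"
    )

    // A representative bridge CNI conflist; the real file pushed by the test
    // run may differ in names, subnets, and plugin options.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }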
	I0804 00:16:02.131996   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:16:02.145300   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:16:02.145333   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:16:02.145340   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:16:02.145348   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:16:02.145370   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:16:02.145380   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:16:02.145389   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:16:02.145397   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:16:02.145403   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:16:02.145412   64502 system_pods.go:74] duration metric: took 13.393537ms to wait for pod list to return data ...
	I0804 00:16:02.145425   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:16:02.149623   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:16:02.149651   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:16:02.149661   64502 node_conditions.go:105] duration metric: took 4.231097ms to run NodePressure ...
	I0804 00:16:02.149677   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:16:02.424261   64502 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429537   64502 kubeadm.go:739] kubelet initialised
	I0804 00:16:02.429555   64502 kubeadm.go:740] duration metric: took 5.269005ms waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429563   64502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:02.435433   64502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.440580   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440606   64502 pod_ready.go:81] duration metric: took 5.145511ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.440619   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440628   64502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.445111   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445136   64502 pod_ready.go:81] duration metric: took 4.497361ms for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.445148   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445157   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.450172   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450200   64502 pod_ready.go:81] duration metric: took 5.032514ms for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.450211   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450219   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.536314   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536386   64502 pod_ready.go:81] duration metric: took 86.155481ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.536398   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536409   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.935794   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935830   64502 pod_ready.go:81] duration metric: took 399.405535ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.935842   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935861   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.335730   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335760   64502 pod_ready.go:81] duration metric: took 399.889478ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.335772   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335780   64502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.735762   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735786   64502 pod_ready.go:81] duration metric: took 399.996995ms for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.735795   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735802   64502 pod_ready.go:38] duration metric: took 1.306222891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
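The pod_ready messages above come from a poll that reads each system-critical pod and checks its Ready condition, skipping pods whose node is not yet Ready. A compact client-go version of that per-pod check is sketched below; the kubeconfig path, namespace, pod name, and 4m budget are taken from the log, while the polling loop itself is only an approximation of minikube's pod_ready.go.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // the "waiting up to 4m0s" budget from the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-7gbcf", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }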
	I0804 00:16:03.735818   64502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:16:03.748578   64502 ops.go:34] apiserver oom_adj: -16
	I0804 00:16:03.748602   64502 kubeadm.go:597] duration metric: took 11.026274037s to restartPrimaryControlPlane
	I0804 00:16:03.748611   64502 kubeadm.go:394] duration metric: took 11.082760058s to StartCluster
	I0804 00:16:03.748637   64502 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.748719   64502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:16:03.750554   64502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.750824   64502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:16:03.750900   64502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:16:03.750998   64502 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-877598"
	I0804 00:16:03.751041   64502 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-877598"
	W0804 00:16:03.751053   64502 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:16:03.751051   64502 addons.go:69] Setting default-storageclass=true in profile "embed-certs-877598"
	I0804 00:16:03.751072   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:16:03.751108   64502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-877598"
	I0804 00:16:03.751063   64502 addons.go:69] Setting metrics-server=true in profile "embed-certs-877598"
	I0804 00:16:03.751181   64502 addons.go:234] Setting addon metrics-server=true in "embed-certs-877598"
	W0804 00:16:03.751196   64502 addons.go:243] addon metrics-server should already be in state true
	I0804 00:16:03.751245   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751467   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751503   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751540   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751612   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751088   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751990   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.752017   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.752817   64502 out.go:177] * Verifying Kubernetes components...
	I0804 00:16:03.754613   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:16:03.769684   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0804 00:16:03.769701   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0804 00:16:03.769697   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0804 00:16:03.770197   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770332   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770619   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770792   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770808   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.770935   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770949   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771125   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771327   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771520   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.771545   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771555   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.771938   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.772138   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772195   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.772521   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772565   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.776267   64502 addons.go:234] Setting addon default-storageclass=true in "embed-certs-877598"
	W0804 00:16:03.776292   64502 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:16:03.776327   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.776695   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.776738   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.789183   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0804 00:16:03.789660   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.789796   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0804 00:16:03.790184   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790202   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790246   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.790608   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.790869   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790900   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790985   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.791276   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.791519   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.793005   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.793338   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.795747   64502 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:16:03.795748   64502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:16:03.796208   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0804 00:16:03.796652   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.797194   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.797220   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.797589   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:16:03.797611   64502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:16:03.797632   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.797640   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.797673   64502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:03.797684   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:16:03.797697   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.798266   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.798311   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.801933   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802083   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802417   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802445   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.802766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.802851   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802868   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802936   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803140   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.803166   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.803310   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.803409   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803512   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.818073   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0804 00:16:03.818647   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.819107   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.819130   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.819488   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.819721   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.821982   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.822216   64502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:03.822232   64502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:16:03.822251   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.825593   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826055   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.826090   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826356   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.826526   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.826667   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.826829   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
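Each "new ssh client" line above records a key-based connection to the node at 192.168.50.140:22 that the subsequent ssh_runner commands reuse. A bare-bones equivalent with golang.org/x/crypto/ssh is sketched below; disabling host-key checking and the sample command are assumptions of this illustration, not minikube's sshutil implementation.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the node with key-based auth and runs one command,
    // mirroring what the ssh_runner lines in the log do.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("192.168.50.140:22", "docker",
    		"/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa",
    		"sudo systemctl is-active kubelet")
    	fmt.Println(out, err)
    }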
	I0804 00:16:03.955019   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:16:03.976453   64502 node_ready.go:35] waiting up to 6m0s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:04.051717   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:04.074720   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:16:04.074740   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:16:04.099578   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:16:04.099606   64502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:16:04.118348   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:04.163390   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:04.163418   64502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:16:04.227379   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:05.143364   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091613097s)
	I0804 00:16:05.143418   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143419   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.025041953s)
	I0804 00:16:05.143430   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143439   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143449   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143726   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143743   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143755   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143764   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.143893   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143915   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143935   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143964   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.144014   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144033   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.144085   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144259   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144305   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144319   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.150739   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.150761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.151073   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.151102   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.151130   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.169806   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.169832   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170103   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.170122   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170148   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170159   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.170171   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170461   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170546   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170563   64502 addons.go:475] Verifying addon metrics-server=true in "embed-certs-877598"
	I0804 00:16:05.172575   64502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0804 00:16:05.173964   64502 addons.go:510] duration metric: took 1.423065893s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
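The addon-enable step above boils down to copying the manifests into /etc/kubernetes/addons/ and applying them with the pinned kubectl binary under the cluster's kubeconfig, exactly as the "sudo KUBECONFIG=... kubectl apply -f ..." lines show. A stripped-down Go wrapper for that final apply is sketched below; paths reuse those from the log and error handling is minimal.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon invokes the versioned kubectl the same way the log shows:
    // sudo KUBECONFIG=... /var/lib/minikube/binaries/v1.30.3/kubectl apply -f <manifests...>.
    func applyAddon(manifests ...string) error {
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
    	}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out)
    	return nil
    }

    func main() {
    	if err := applyAddon(
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	); err != nil {
    		fmt.Println(err)
    	}
    }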
	I0804 00:16:02.307325   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:02.324168   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:02.324233   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:02.370204   64758 cri.go:89] found id: ""
	I0804 00:16:02.370234   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.370250   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:02.370258   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:02.370325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:02.405586   64758 cri.go:89] found id: ""
	I0804 00:16:02.405616   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.405628   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:02.405636   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:02.405694   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:02.445644   64758 cri.go:89] found id: ""
	I0804 00:16:02.445665   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.445675   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:02.445682   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:02.445739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:02.483659   64758 cri.go:89] found id: ""
	I0804 00:16:02.483686   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.483695   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:02.483701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:02.483751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:02.519903   64758 cri.go:89] found id: ""
	I0804 00:16:02.519929   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.519938   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:02.519944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:02.519991   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:02.557373   64758 cri.go:89] found id: ""
	I0804 00:16:02.557401   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.557410   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:02.557416   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:02.557472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:02.594203   64758 cri.go:89] found id: ""
	I0804 00:16:02.594238   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.594249   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:02.594256   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:02.594316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:02.635487   64758 cri.go:89] found id: ""
	I0804 00:16:02.635512   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.635520   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:02.635529   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:02.635543   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:02.686990   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:02.687035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:02.701784   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:02.701810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:02.778626   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:02.778648   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:02.778662   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:02.856056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:02.856097   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:05.402858   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:05.418825   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:05.418900   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:05.458789   64758 cri.go:89] found id: ""
	I0804 00:16:05.458872   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.458887   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:05.458895   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:05.458967   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:05.498258   64758 cri.go:89] found id: ""
	I0804 00:16:05.498284   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.498295   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:05.498302   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:05.498364   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:05.540892   64758 cri.go:89] found id: ""
	I0804 00:16:05.540919   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.540927   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:05.540933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:05.540992   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:05.578876   64758 cri.go:89] found id: ""
	I0804 00:16:05.578911   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.578919   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:05.578924   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:05.578971   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:05.616248   64758 cri.go:89] found id: ""
	I0804 00:16:05.616272   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.616280   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:05.616285   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:05.616339   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:05.654387   64758 cri.go:89] found id: ""
	I0804 00:16:05.654419   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.654428   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:05.654436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:05.654528   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:05.695579   64758 cri.go:89] found id: ""
	I0804 00:16:05.695613   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.695625   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:05.695669   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:05.695752   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:05.740754   64758 cri.go:89] found id: ""
	I0804 00:16:05.740777   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.740785   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:05.740793   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:05.740805   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:05.792091   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:05.792126   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:05.809130   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:05.809164   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:05.888441   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:05.888465   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:05.888479   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:05.969336   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:05.969390   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:03.111834   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.613749   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:03.830570   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:06.328076   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.980692   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:08.480205   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:09.480127   64502 node_ready.go:49] node "embed-certs-877598" has status "Ready":"True"
	I0804 00:16:09.480147   64502 node_ready.go:38] duration metric: took 5.503660587s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:09.480155   64502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:09.485704   64502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491316   64502 pod_ready.go:92] pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:09.491340   64502 pod_ready.go:81] duration metric: took 5.611918ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491348   64502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
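The node_ready lines above show the run blocking until the node's Ready condition flips to True (about 5.5s here) before the per-pod checks resume. A minimal client-go check for that node condition is sketched below; the kubeconfig path and node name are taken from the log, and this is only an approximation of minikube's node_ready.go.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady returns true when the node reports the Ready condition as True.
    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeIsReady(cs, "embed-certs-877598")
    	fmt.Println(ready, err)
    }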
	I0804 00:16:08.514981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:08.531117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:08.531188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:08.569167   64758 cri.go:89] found id: ""
	I0804 00:16:08.569199   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.569210   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:08.569218   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:08.569282   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:08.608478   64758 cri.go:89] found id: ""
	I0804 00:16:08.608559   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.608572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:08.608580   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:08.608636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:08.645939   64758 cri.go:89] found id: ""
	I0804 00:16:08.645972   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.645983   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:08.645990   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:08.646050   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:08.685274   64758 cri.go:89] found id: ""
	I0804 00:16:08.685305   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.685316   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:08.685324   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:08.685400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:08.722314   64758 cri.go:89] found id: ""
	I0804 00:16:08.722345   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.722357   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:08.722363   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:08.722427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:08.758577   64758 cri.go:89] found id: ""
	I0804 00:16:08.758606   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.758617   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:08.758624   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:08.758685   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.798734   64758 cri.go:89] found id: ""
	I0804 00:16:08.798761   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.798773   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:08.798781   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:08.798842   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:08.837577   64758 cri.go:89] found id: ""
	I0804 00:16:08.837600   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.837608   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:08.837616   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:08.837627   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:08.894426   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:08.894465   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:08.909851   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:08.909879   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:08.989858   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:08.989878   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:08.989893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:09.081056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:09.081098   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:11.627914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:11.641805   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:11.641896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:11.679002   64758 cri.go:89] found id: ""
	I0804 00:16:11.679028   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.679036   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:11.679042   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:11.679090   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:11.720188   64758 cri.go:89] found id: ""
	I0804 00:16:11.720220   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.720236   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:11.720245   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:11.720307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:11.760085   64758 cri.go:89] found id: ""
	I0804 00:16:11.760118   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.760130   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:11.760138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:11.760198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:11.796220   64758 cri.go:89] found id: ""
	I0804 00:16:11.796249   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.796266   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:11.796274   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:11.796335   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:11.834216   64758 cri.go:89] found id: ""
	I0804 00:16:11.834243   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.834253   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:11.834260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:11.834336   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:11.869205   64758 cri.go:89] found id: ""
	I0804 00:16:11.869230   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.869237   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:11.869243   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:11.869301   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.110499   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.618011   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:08.827284   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.828942   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:11.498264   64502 pod_ready.go:102] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:12.498916   64502 pod_ready.go:92] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:12.498949   64502 pod_ready.go:81] duration metric: took 3.007593153s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:12.498961   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562862   64502 pod_ready.go:92] pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.562896   64502 pod_ready.go:81] duration metric: took 2.063926324s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562910   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573628   64502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.573655   64502 pod_ready.go:81] duration metric: took 10.735916ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573670   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583241   64502 pod_ready.go:92] pod "kube-proxy-wk8zf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.583266   64502 pod_ready.go:81] duration metric: took 9.588875ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583278   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593419   64502 pod_ready.go:92] pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.593445   64502 pod_ready.go:81] duration metric: took 10.158665ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593457   64502 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:11.912091   64758 cri.go:89] found id: ""
	I0804 00:16:11.912120   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.912132   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:11.912145   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:11.912203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:11.949570   64758 cri.go:89] found id: ""
	I0804 00:16:11.949603   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.949614   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:11.949625   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:11.949643   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:12.006542   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:12.006575   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:12.022435   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:12.022474   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:12.101007   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:12.101032   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:12.101057   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:12.183836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:12.183876   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:14.725345   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:14.738389   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:14.738464   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:14.780103   64758 cri.go:89] found id: ""
	I0804 00:16:14.780133   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.780142   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:14.780147   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:14.780197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:14.817811   64758 cri.go:89] found id: ""
	I0804 00:16:14.817847   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.817863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:14.817872   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:14.817946   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:14.854450   64758 cri.go:89] found id: ""
	I0804 00:16:14.854478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.854488   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:14.854495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:14.854561   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:14.891862   64758 cri.go:89] found id: ""
	I0804 00:16:14.891891   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.891900   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:14.891905   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:14.891958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:14.928450   64758 cri.go:89] found id: ""
	I0804 00:16:14.928478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.928488   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:14.928495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:14.928554   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:14.965820   64758 cri.go:89] found id: ""
	I0804 00:16:14.965848   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.965860   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:14.965867   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:14.965945   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:15.008725   64758 cri.go:89] found id: ""
	I0804 00:16:15.008874   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.008888   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:15.008897   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:15.008957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:15.044618   64758 cri.go:89] found id: ""
	I0804 00:16:15.044768   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.044792   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:15.044802   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:15.044815   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:15.102786   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:15.102825   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:15.118305   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:15.118347   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:15.196397   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:15.196420   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:15.196435   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:15.277941   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:15.277986   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:13.110969   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.112546   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:13.327840   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.826447   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:16.600315   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.099064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.819354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:17.834271   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:17.834332   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:17.870930   64758 cri.go:89] found id: ""
	I0804 00:16:17.870961   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.870973   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:17.870980   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:17.871040   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:17.907980   64758 cri.go:89] found id: ""
	I0804 00:16:17.908007   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.908016   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:17.908021   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:17.908067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:17.943257   64758 cri.go:89] found id: ""
	I0804 00:16:17.943284   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.943295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:17.943301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:17.943363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:17.982297   64758 cri.go:89] found id: ""
	I0804 00:16:17.982328   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.982338   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:17.982345   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:17.982405   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:18.022780   64758 cri.go:89] found id: ""
	I0804 00:16:18.022810   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.022841   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:18.022850   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:18.022913   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:18.061891   64758 cri.go:89] found id: ""
	I0804 00:16:18.061926   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.061937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:18.061945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:18.062012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:18.100807   64758 cri.go:89] found id: ""
	I0804 00:16:18.100845   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.100855   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:18.100862   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:18.100917   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:18.142011   64758 cri.go:89] found id: ""
	I0804 00:16:18.142044   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.142056   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:18.142066   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:18.142090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:18.195476   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:18.195511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:18.209661   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:18.209690   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:18.282638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:18.282657   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:18.282669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:18.363900   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:18.363938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:20.908753   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:20.922878   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:20.922962   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:20.961013   64758 cri.go:89] found id: ""
	I0804 00:16:20.961041   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.961052   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:20.961058   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:20.961109   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:20.998027   64758 cri.go:89] found id: ""
	I0804 00:16:20.998059   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.998068   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:20.998074   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:20.998121   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:21.035640   64758 cri.go:89] found id: ""
	I0804 00:16:21.035669   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.035680   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:21.035688   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:21.035751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:21.075737   64758 cri.go:89] found id: ""
	I0804 00:16:21.075770   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.075779   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:21.075786   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:21.075846   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:21.120024   64758 cri.go:89] found id: ""
	I0804 00:16:21.120046   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.120054   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:21.120061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:21.120126   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:21.160796   64758 cri.go:89] found id: ""
	I0804 00:16:21.160821   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.160840   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:21.160847   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:21.160907   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:21.195519   64758 cri.go:89] found id: ""
	I0804 00:16:21.195547   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.195558   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:21.195566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:21.195629   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:21.236193   64758 cri.go:89] found id: ""
	I0804 00:16:21.236222   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.236232   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:21.236243   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:21.236258   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:21.295154   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:21.295198   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:21.309540   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:21.309566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:21.389391   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:21.389416   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:21.389433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:21.472771   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:21.472808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:17.611366   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.612092   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.827036   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.827655   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.828026   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.101899   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:23.601687   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.018923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:24.032954   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:24.033013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:24.073677   64758 cri.go:89] found id: ""
	I0804 00:16:24.073703   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.073711   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:24.073716   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:24.073777   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:24.115752   64758 cri.go:89] found id: ""
	I0804 00:16:24.115775   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.115785   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:24.115792   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:24.115849   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:24.152967   64758 cri.go:89] found id: ""
	I0804 00:16:24.153001   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.153017   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:24.153024   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:24.153098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:24.190557   64758 cri.go:89] found id: ""
	I0804 00:16:24.190581   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.190589   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:24.190595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:24.190643   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:24.229312   64758 cri.go:89] found id: ""
	I0804 00:16:24.229341   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.229351   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:24.229373   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:24.229437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:24.265076   64758 cri.go:89] found id: ""
	I0804 00:16:24.265100   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.265107   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:24.265113   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:24.265167   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:24.306508   64758 cri.go:89] found id: ""
	I0804 00:16:24.306534   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.306542   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:24.306547   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:24.306598   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:24.350714   64758 cri.go:89] found id: ""
	I0804 00:16:24.350747   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.350759   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:24.350770   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:24.350785   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:24.366188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:24.366216   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:24.438410   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:24.438431   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:24.438447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:24.522635   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:24.522669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.562647   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:24.562678   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:22.110420   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.111399   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.613839   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.327982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.826914   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.099435   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:28.099896   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:30.100659   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:27.119437   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:27.133330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:27.133426   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:27.170001   64758 cri.go:89] found id: ""
	I0804 00:16:27.170039   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.170048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:27.170054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:27.170112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:27.205811   64758 cri.go:89] found id: ""
	I0804 00:16:27.205843   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.205854   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:27.205861   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:27.205922   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:27.247249   64758 cri.go:89] found id: ""
	I0804 00:16:27.247278   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.247287   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:27.247294   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:27.247360   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:27.285659   64758 cri.go:89] found id: ""
	I0804 00:16:27.285688   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.285697   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:27.285703   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:27.285774   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:27.321039   64758 cri.go:89] found id: ""
	I0804 00:16:27.321066   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.321075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:27.321084   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:27.321130   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:27.359947   64758 cri.go:89] found id: ""
	I0804 00:16:27.359977   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.359988   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:27.359996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:27.360056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:27.401408   64758 cri.go:89] found id: ""
	I0804 00:16:27.401432   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.401440   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:27.401449   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:27.401495   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:27.437297   64758 cri.go:89] found id: ""
	I0804 00:16:27.437326   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.437337   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:27.437347   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:27.437373   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.490594   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:27.490639   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:27.505993   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:27.506021   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:27.588779   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:27.588804   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:27.588820   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:27.681557   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:27.681592   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.225062   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:30.239475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:30.239540   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:30.283896   64758 cri.go:89] found id: ""
	I0804 00:16:30.283923   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.283931   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:30.283938   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:30.284013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:30.321506   64758 cri.go:89] found id: ""
	I0804 00:16:30.321532   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.321539   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:30.321545   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:30.321593   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:30.358314   64758 cri.go:89] found id: ""
	I0804 00:16:30.358340   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.358347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:30.358353   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:30.358400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:30.393561   64758 cri.go:89] found id: ""
	I0804 00:16:30.393587   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.393595   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:30.393600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:30.393646   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:30.429907   64758 cri.go:89] found id: ""
	I0804 00:16:30.429935   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.429943   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:30.429949   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:30.430008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:30.466305   64758 cri.go:89] found id: ""
	I0804 00:16:30.466332   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.466342   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:30.466350   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:30.466408   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:30.505384   64758 cri.go:89] found id: ""
	I0804 00:16:30.505413   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.505424   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:30.505431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:30.505492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:30.541756   64758 cri.go:89] found id: ""
	I0804 00:16:30.541786   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.541796   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:30.541806   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:30.541821   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:30.555516   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:30.555554   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:30.627442   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:30.627463   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:30.627473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:30.701452   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:30.701489   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.743436   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:30.743473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:29.111149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.111470   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:29.327268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.328424   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:32.605884   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:34.608119   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.298898   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:33.315211   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:33.315292   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:33.353171   64758 cri.go:89] found id: ""
	I0804 00:16:33.353207   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.353220   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:33.353229   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:33.353297   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:33.389767   64758 cri.go:89] found id: ""
	I0804 00:16:33.389792   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.389799   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:33.389805   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:33.389851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:33.446889   64758 cri.go:89] found id: ""
	I0804 00:16:33.446928   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.446939   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:33.446946   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:33.447004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:33.487340   64758 cri.go:89] found id: ""
	I0804 00:16:33.487362   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.487370   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:33.487376   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:33.487423   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:33.530398   64758 cri.go:89] found id: ""
	I0804 00:16:33.530421   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.530429   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:33.530435   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:33.530483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:33.568725   64758 cri.go:89] found id: ""
	I0804 00:16:33.568753   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.568762   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:33.568769   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:33.568818   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:33.607205   64758 cri.go:89] found id: ""
	I0804 00:16:33.607232   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.607242   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:33.607249   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:33.607311   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:33.648188   64758 cri.go:89] found id: ""
	I0804 00:16:33.648220   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.648230   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:33.648240   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:33.648256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.700231   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:33.700266   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:33.714899   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:33.714932   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:33.794306   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:33.794326   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:33.794340   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.872446   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:33.872482   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.415000   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:36.428920   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:36.428996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:36.464784   64758 cri.go:89] found id: ""
	I0804 00:16:36.464810   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.464817   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:36.464823   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:36.464925   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:36.501394   64758 cri.go:89] found id: ""
	I0804 00:16:36.501423   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.501431   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:36.501437   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:36.501497   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:36.537049   64758 cri.go:89] found id: ""
	I0804 00:16:36.537079   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.537090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:36.537102   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:36.537173   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:36.573956   64758 cri.go:89] found id: ""
	I0804 00:16:36.573986   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.573997   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:36.574004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:36.574065   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:36.612996   64758 cri.go:89] found id: ""
	I0804 00:16:36.613016   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.613023   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:36.613029   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:36.613083   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:36.652346   64758 cri.go:89] found id: ""
	I0804 00:16:36.652367   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.652374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:36.652380   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:36.652437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:36.690073   64758 cri.go:89] found id: ""
	I0804 00:16:36.690100   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.690110   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:36.690119   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:36.690182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:36.732436   64758 cri.go:89] found id: ""
	I0804 00:16:36.732466   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.732477   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:36.732487   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:36.732505   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:36.746036   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:36.746060   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:36.818141   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:36.818164   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:36.818179   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.611181   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.611691   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.329719   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.330172   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.100705   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.603600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:36.907689   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:36.907732   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.947104   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:36.947135   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.502960   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:39.516340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:39.516414   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:39.555903   64758 cri.go:89] found id: ""
	I0804 00:16:39.555929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.555939   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:39.555946   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:39.556004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:39.599791   64758 cri.go:89] found id: ""
	I0804 00:16:39.599816   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.599827   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:39.599834   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:39.599894   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:39.642903   64758 cri.go:89] found id: ""
	I0804 00:16:39.642929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.642936   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:39.642944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:39.643004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:39.678667   64758 cri.go:89] found id: ""
	I0804 00:16:39.678693   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.678702   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:39.678709   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:39.678757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:39.716888   64758 cri.go:89] found id: ""
	I0804 00:16:39.716916   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.716926   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:39.716933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:39.717001   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:39.751576   64758 cri.go:89] found id: ""
	I0804 00:16:39.751602   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.751610   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:39.751616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:39.751664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:39.794026   64758 cri.go:89] found id: ""
	I0804 00:16:39.794056   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.794067   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:39.794087   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:39.794158   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:39.841426   64758 cri.go:89] found id: ""
	I0804 00:16:39.841454   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.841464   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:39.841474   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:39.841492   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.902579   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:39.902616   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:39.924467   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:39.924495   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:40.001318   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:40.001345   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:40.001377   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:40.081520   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:40.081552   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:38.111443   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:40.610810   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.827851   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.828752   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.327716   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.100037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.100850   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.623094   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:42.636523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:42.636594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:42.674188   64758 cri.go:89] found id: ""
	I0804 00:16:42.674218   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.674226   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:42.674231   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:42.674277   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:42.708496   64758 cri.go:89] found id: ""
	I0804 00:16:42.708522   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.708532   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:42.708539   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:42.708601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:42.751050   64758 cri.go:89] found id: ""
	I0804 00:16:42.751087   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.751100   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:42.751107   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:42.751170   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:42.788520   64758 cri.go:89] found id: ""
	I0804 00:16:42.788546   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.788555   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:42.788560   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:42.788619   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:42.828273   64758 cri.go:89] found id: ""
	I0804 00:16:42.828297   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.828304   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:42.828309   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:42.828356   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:42.867754   64758 cri.go:89] found id: ""
	I0804 00:16:42.867784   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.867799   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:42.867807   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:42.867864   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:42.903945   64758 cri.go:89] found id: ""
	I0804 00:16:42.903977   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.903988   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:42.903996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:42.904059   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:42.942477   64758 cri.go:89] found id: ""
	I0804 00:16:42.942518   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.942539   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:42.942549   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:42.942565   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.981776   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:42.981810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:43.037601   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:43.037634   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:43.052719   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:43.052746   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:43.122664   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:43.122688   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:43.122702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:45.701275   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:45.714532   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:45.714607   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:45.750932   64758 cri.go:89] found id: ""
	I0804 00:16:45.750955   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.750986   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:45.750991   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:45.751042   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:45.787348   64758 cri.go:89] found id: ""
	I0804 00:16:45.787373   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.787381   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:45.787387   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:45.787441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:45.823390   64758 cri.go:89] found id: ""
	I0804 00:16:45.823419   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.823429   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:45.823436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:45.823498   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:45.861400   64758 cri.go:89] found id: ""
	I0804 00:16:45.861430   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.861440   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:45.861448   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:45.861508   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:45.898992   64758 cri.go:89] found id: ""
	I0804 00:16:45.899024   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.899036   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:45.899043   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:45.899110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:45.934542   64758 cri.go:89] found id: ""
	I0804 00:16:45.934570   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.934582   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:45.934589   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:45.934648   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:45.967908   64758 cri.go:89] found id: ""
	I0804 00:16:45.967938   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.967949   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:45.967957   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:45.968018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:46.006475   64758 cri.go:89] found id: ""
	I0804 00:16:46.006504   64758 logs.go:276] 0 containers: []
	W0804 00:16:46.006516   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:46.006526   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:46.006541   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:46.058760   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:46.058793   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:46.074753   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:46.074777   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:46.149634   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:46.149655   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:46.149671   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:46.230104   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:46.230140   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:43.111492   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:45.611224   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.827683   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:47.326999   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:46.600307   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.100532   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:48.772224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:48.785848   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:48.785935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.825206   64758 cri.go:89] found id: ""
	I0804 00:16:48.825232   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.825242   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:48.825249   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:48.825315   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:48.861559   64758 cri.go:89] found id: ""
	I0804 00:16:48.861588   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.861599   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:48.861607   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:48.861675   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:48.903375   64758 cri.go:89] found id: ""
	I0804 00:16:48.903401   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.903412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:48.903419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:48.903480   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:48.940708   64758 cri.go:89] found id: ""
	I0804 00:16:48.940736   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.940748   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:48.940755   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:48.940817   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:48.976190   64758 cri.go:89] found id: ""
	I0804 00:16:48.976218   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.976228   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:48.976236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:48.976291   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:49.010393   64758 cri.go:89] found id: ""
	I0804 00:16:49.010423   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.010434   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:49.010442   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:49.010506   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:49.046670   64758 cri.go:89] found id: ""
	I0804 00:16:49.046698   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.046707   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:49.046711   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:49.046759   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:49.085254   64758 cri.go:89] found id: ""
	I0804 00:16:49.085284   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.085293   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:49.085302   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:49.085314   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:49.142402   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:49.142433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:49.157063   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:49.157092   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:49.233808   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:49.233829   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:49.233841   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:49.320355   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:49.320395   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:51.862548   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:51.875679   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:51.875750   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.110954   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:50.111867   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.327109   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.327920   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.600258   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.601052   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.911400   64758 cri.go:89] found id: ""
	I0804 00:16:51.911427   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.911437   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:51.911444   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:51.911505   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:51.948825   64758 cri.go:89] found id: ""
	I0804 00:16:51.948853   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.948863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:51.948870   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:51.948935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:51.989458   64758 cri.go:89] found id: ""
	I0804 00:16:51.989488   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.989499   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:51.989506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:51.989568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:52.026663   64758 cri.go:89] found id: ""
	I0804 00:16:52.026685   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.026693   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:52.026698   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:52.026754   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:52.066089   64758 cri.go:89] found id: ""
	I0804 00:16:52.066115   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.066127   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:52.066135   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:52.066198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:52.102159   64758 cri.go:89] found id: ""
	I0804 00:16:52.102185   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.102196   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:52.102203   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:52.102258   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:52.144239   64758 cri.go:89] found id: ""
	I0804 00:16:52.144266   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.144276   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:52.144283   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:52.144344   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:52.180679   64758 cri.go:89] found id: ""
	I0804 00:16:52.180708   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.180717   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:52.180725   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:52.180738   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:52.262074   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:52.262116   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.305913   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:52.305948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:52.357044   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:52.357081   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:52.372090   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:52.372119   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:52.444148   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:54.944910   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:54.958182   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:54.958239   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:54.993629   64758 cri.go:89] found id: ""
	I0804 00:16:54.993657   64758 logs.go:276] 0 containers: []
	W0804 00:16:54.993668   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:54.993675   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:54.993734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:55.029270   64758 cri.go:89] found id: ""
	I0804 00:16:55.029299   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.029310   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:55.029317   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:55.029393   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:55.067923   64758 cri.go:89] found id: ""
	I0804 00:16:55.067951   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.067961   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:55.067968   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:55.068027   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:55.107533   64758 cri.go:89] found id: ""
	I0804 00:16:55.107556   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.107565   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:55.107572   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:55.107633   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:55.143828   64758 cri.go:89] found id: ""
	I0804 00:16:55.143856   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.143868   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:55.143875   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:55.143940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:55.177960   64758 cri.go:89] found id: ""
	I0804 00:16:55.178015   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.178030   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:55.178038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:55.178112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:55.217457   64758 cri.go:89] found id: ""
	I0804 00:16:55.217481   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.217488   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:55.217494   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:55.217538   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:55.259862   64758 cri.go:89] found id: ""
	I0804 00:16:55.259890   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.259898   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:55.259907   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:55.259918   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:55.311566   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:55.311598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:55.327833   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:55.327866   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:55.406475   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:55.406495   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:55.406511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:55.484586   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:55.484618   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.610982   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:54.611276   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.611515   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.827394   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:55.827945   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.099238   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.100223   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.599870   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.028251   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:58.042169   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:58.042236   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:58.076836   64758 cri.go:89] found id: ""
	I0804 00:16:58.076859   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.076868   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:58.076873   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:58.076937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:58.115989   64758 cri.go:89] found id: ""
	I0804 00:16:58.116019   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.116031   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:58.116037   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:58.116099   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:58.155049   64758 cri.go:89] found id: ""
	I0804 00:16:58.155079   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.155090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:58.155097   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:58.155160   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:58.190257   64758 cri.go:89] found id: ""
	I0804 00:16:58.190293   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.190305   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:58.190315   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:58.190370   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:58.225001   64758 cri.go:89] found id: ""
	I0804 00:16:58.225029   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.225038   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:58.225061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:58.225118   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:58.268881   64758 cri.go:89] found id: ""
	I0804 00:16:58.268925   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.268937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:58.268945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:58.269010   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:58.305223   64758 cri.go:89] found id: ""
	I0804 00:16:58.305253   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.305269   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:58.305277   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:58.305340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:58.340517   64758 cri.go:89] found id: ""
	I0804 00:16:58.340548   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.340559   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:58.340570   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:58.340584   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:58.355372   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:58.355403   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:58.426292   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:58.426312   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:58.426326   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:58.509990   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:58.510034   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.550957   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:58.550988   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.104806   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:01.119379   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:01.119453   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:01.158376   64758 cri.go:89] found id: ""
	I0804 00:17:01.158407   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.158419   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:01.158426   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:01.158484   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:01.193826   64758 cri.go:89] found id: ""
	I0804 00:17:01.193858   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.193869   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:01.193876   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:01.193937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:01.228566   64758 cri.go:89] found id: ""
	I0804 00:17:01.228588   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.228600   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:01.228607   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:01.228667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:01.265736   64758 cri.go:89] found id: ""
	I0804 00:17:01.265762   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.265772   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:01.265778   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:01.265834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:01.302655   64758 cri.go:89] found id: ""
	I0804 00:17:01.302679   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.302694   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:01.302699   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:01.302753   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:01.340191   64758 cri.go:89] found id: ""
	I0804 00:17:01.340218   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.340226   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:01.340236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:01.340294   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:01.375767   64758 cri.go:89] found id: ""
	I0804 00:17:01.375789   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.375797   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:01.375802   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:01.375875   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:01.412446   64758 cri.go:89] found id: ""
	I0804 00:17:01.412479   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.412490   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:01.412502   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:01.412518   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.466271   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:01.466309   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:01.480800   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:01.480838   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:01.547909   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:01.547932   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:01.547948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:01.628318   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:01.628351   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.611854   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:01.111626   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.326831   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.327154   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.328038   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.601960   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.099489   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.175883   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:04.189038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:04.189098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:04.229126   64758 cri.go:89] found id: ""
	I0804 00:17:04.229158   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.229167   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:04.229174   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:04.229235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:04.264107   64758 cri.go:89] found id: ""
	I0804 00:17:04.264134   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.264142   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:04.264147   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:04.264203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:04.299959   64758 cri.go:89] found id: ""
	I0804 00:17:04.299996   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.300004   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:04.300010   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:04.300056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:04.337978   64758 cri.go:89] found id: ""
	I0804 00:17:04.338006   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.338016   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:04.338023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:04.338081   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:04.377969   64758 cri.go:89] found id: ""
	I0804 00:17:04.377993   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.378001   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:04.378006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:04.378068   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:04.413036   64758 cri.go:89] found id: ""
	I0804 00:17:04.413062   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.413071   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:04.413078   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:04.413140   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:04.450387   64758 cri.go:89] found id: ""
	I0804 00:17:04.450417   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.450426   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:04.450431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:04.450488   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:04.490132   64758 cri.go:89] found id: ""
	I0804 00:17:04.490165   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.490177   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:04.490188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:04.490204   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:04.560633   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:04.560653   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:04.560668   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:04.639409   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:04.639445   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.682479   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:04.682512   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:04.734823   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:04.734857   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:03.112357   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.828050   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.327249   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.099893   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.100093   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.250174   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:07.263523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:07.263599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:07.300095   64758 cri.go:89] found id: ""
	I0804 00:17:07.300124   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.300136   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:07.300144   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:07.300211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:07.337798   64758 cri.go:89] found id: ""
	I0804 00:17:07.337824   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.337846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:07.337851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:07.337902   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:07.375305   64758 cri.go:89] found id: ""
	I0804 00:17:07.375337   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.375348   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:07.375356   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:07.375406   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:07.411603   64758 cri.go:89] found id: ""
	I0804 00:17:07.411629   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.411639   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:07.411646   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:07.411704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:07.450478   64758 cri.go:89] found id: ""
	I0804 00:17:07.450502   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.450511   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:07.450518   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:07.450564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:07.489972   64758 cri.go:89] found id: ""
	I0804 00:17:07.489997   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.490006   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:07.490012   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:07.490073   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:07.523685   64758 cri.go:89] found id: ""
	I0804 00:17:07.523713   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.523725   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:07.523732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:07.523789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:07.562636   64758 cri.go:89] found id: ""
	I0804 00:17:07.562665   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.562675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:07.562686   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:07.562702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:07.647968   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:07.648004   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.689829   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:07.689856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:07.738333   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:07.738366   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.753419   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:07.753448   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:07.829678   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.329981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:10.343676   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:10.343743   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:10.379546   64758 cri.go:89] found id: ""
	I0804 00:17:10.379575   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.379586   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:10.379594   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:10.379657   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:10.416247   64758 cri.go:89] found id: ""
	I0804 00:17:10.416271   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.416279   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:10.416284   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:10.416340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:10.455261   64758 cri.go:89] found id: ""
	I0804 00:17:10.455291   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.455303   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:10.455310   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:10.455373   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:10.493220   64758 cri.go:89] found id: ""
	I0804 00:17:10.493251   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.493262   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:10.493270   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:10.493329   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:10.538682   64758 cri.go:89] found id: ""
	I0804 00:17:10.538709   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.538720   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:10.538727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:10.538787   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:10.575509   64758 cri.go:89] found id: ""
	I0804 00:17:10.575535   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.575546   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:10.575553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:10.575609   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:10.613163   64758 cri.go:89] found id: ""
	I0804 00:17:10.613188   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.613196   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:10.613201   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:10.613260   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:10.648914   64758 cri.go:89] found id: ""
	I0804 00:17:10.648940   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.648947   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:10.648956   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:10.648968   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:10.700151   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:10.700187   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:10.714971   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:10.714998   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:10.787679   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.787698   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:10.787710   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:10.865008   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:10.865048   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.611770   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:10.110299   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.327569   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.327855   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.603427   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.100524   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.406150   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:13.419602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:13.419659   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:13.456823   64758 cri.go:89] found id: ""
	I0804 00:17:13.456852   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.456863   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:13.456870   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:13.456935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:13.493527   64758 cri.go:89] found id: ""
	I0804 00:17:13.493556   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.493567   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:13.493574   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:13.493697   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:13.529745   64758 cri.go:89] found id: ""
	I0804 00:17:13.529770   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.529784   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:13.529790   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:13.529856   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:13.567775   64758 cri.go:89] found id: ""
	I0804 00:17:13.567811   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.567819   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:13.567824   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:13.567888   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:13.604638   64758 cri.go:89] found id: ""
	I0804 00:17:13.604670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.604678   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:13.604685   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:13.604741   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:13.646638   64758 cri.go:89] found id: ""
	I0804 00:17:13.646670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.646679   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:13.646684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:13.646730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:13.694656   64758 cri.go:89] found id: ""
	I0804 00:17:13.694682   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.694693   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:13.694701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:13.694761   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:13.733738   64758 cri.go:89] found id: ""
	I0804 00:17:13.733762   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.733771   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:13.733780   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:13.733792   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:13.749747   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:13.749775   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:13.832826   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:13.832852   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:13.832868   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:13.914198   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:13.914233   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.952753   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:13.952787   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.503600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:16.516932   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:16.517004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:16.552012   64758 cri.go:89] found id: ""
	I0804 00:17:16.552037   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.552046   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:16.552052   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:16.552110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:16.590626   64758 cri.go:89] found id: ""
	I0804 00:17:16.590653   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.590660   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:16.590666   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:16.590732   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:16.628684   64758 cri.go:89] found id: ""
	I0804 00:17:16.628712   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.628723   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:16.628729   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:16.628792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:16.664934   64758 cri.go:89] found id: ""
	I0804 00:17:16.664969   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.664980   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:16.664987   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:16.665054   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:16.700098   64758 cri.go:89] found id: ""
	I0804 00:17:16.700127   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.700138   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:16.700144   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:16.700214   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:16.736761   64758 cri.go:89] found id: ""
	I0804 00:17:16.736786   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.736795   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:16.736800   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:16.736863   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:16.780010   64758 cri.go:89] found id: ""
	I0804 00:17:16.780033   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.780045   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:16.780050   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:16.780106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:16.816079   64758 cri.go:89] found id: ""
	I0804 00:17:16.816103   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.816112   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:16.816122   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:16.816136   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.866526   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:16.866560   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:16.881254   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:16.881287   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:17:12.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.610978   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.611860   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.827860   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.327167   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.601482   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:19.100152   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:17:16.952491   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:16.952515   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:16.952530   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:17.038943   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:17.038977   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:19.580078   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:19.595538   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:19.595601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:19.632206   64758 cri.go:89] found id: ""
	I0804 00:17:19.632234   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.632245   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:19.632252   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:19.632307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:19.670335   64758 cri.go:89] found id: ""
	I0804 00:17:19.670362   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.670377   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:19.670388   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:19.670447   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:19.707772   64758 cri.go:89] found id: ""
	I0804 00:17:19.707801   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.707812   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:19.707818   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:19.707877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:19.743822   64758 cri.go:89] found id: ""
	I0804 00:17:19.743855   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.743867   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:19.743874   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:19.743930   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:19.781592   64758 cri.go:89] found id: ""
	I0804 00:17:19.781622   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.781632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:19.781640   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:19.781698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:19.818792   64758 cri.go:89] found id: ""
	I0804 00:17:19.818815   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.818823   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:19.818829   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:19.818877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:19.856486   64758 cri.go:89] found id: ""
	I0804 00:17:19.856511   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.856522   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:19.856528   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:19.856586   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:19.901721   64758 cri.go:89] found id: ""
	I0804 00:17:19.901743   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.901754   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:19.901764   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:19.901780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:19.980095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:19.980119   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:19.980134   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:20.072699   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:20.072750   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:20.159007   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:20.159038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:20.211785   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:20.211818   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:19.110218   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.110572   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:18.828527   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:20.828554   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.600968   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.602526   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.603220   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:22.727235   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:22.740922   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:22.740996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:22.780356   64758 cri.go:89] found id: ""
	I0804 00:17:22.780381   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.780392   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:22.780400   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:22.780459   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:22.817075   64758 cri.go:89] found id: ""
	I0804 00:17:22.817100   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.817111   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:22.817119   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:22.817182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:22.857213   64758 cri.go:89] found id: ""
	I0804 00:17:22.857243   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.857253   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:22.857260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:22.857325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:22.894049   64758 cri.go:89] found id: ""
	I0804 00:17:22.894085   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.894096   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:22.894104   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:22.894171   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:22.929718   64758 cri.go:89] found id: ""
	I0804 00:17:22.929746   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.929756   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:22.929770   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:22.929843   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:22.964863   64758 cri.go:89] found id: ""
	I0804 00:17:22.964892   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.964901   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:22.964907   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:22.964958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:23.002565   64758 cri.go:89] found id: ""
	I0804 00:17:23.002593   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.002603   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:23.002611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:23.002676   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:23.038161   64758 cri.go:89] found id: ""
	I0804 00:17:23.038188   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.038199   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:23.038211   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:23.038224   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:23.091865   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:23.091903   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:23.108358   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:23.108388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:23.186417   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:23.186438   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:23.186453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.269119   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:23.269161   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:25.812405   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:25.833174   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:25.833253   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:25.881654   64758 cri.go:89] found id: ""
	I0804 00:17:25.881681   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.881690   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:25.881696   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:25.881757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:25.936968   64758 cri.go:89] found id: ""
	I0804 00:17:25.936997   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.937006   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:25.937011   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:25.937066   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:25.972437   64758 cri.go:89] found id: ""
	I0804 00:17:25.972462   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.972470   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:25.972475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:25.972529   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:26.008306   64758 cri.go:89] found id: ""
	I0804 00:17:26.008346   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.008357   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:26.008366   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:26.008435   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:26.045593   64758 cri.go:89] found id: ""
	I0804 00:17:26.045620   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.045632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:26.045639   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:26.045696   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:26.084170   64758 cri.go:89] found id: ""
	I0804 00:17:26.084195   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.084205   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:26.084212   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:26.084272   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:26.122524   64758 cri.go:89] found id: ""
	I0804 00:17:26.122551   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.122559   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:26.122565   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:26.122623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:26.159264   64758 cri.go:89] found id: ""
	I0804 00:17:26.159297   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.159308   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:26.159320   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:26.159337   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:26.205692   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:26.205718   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:26.257286   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:26.257321   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:26.271582   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:26.271611   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:26.344562   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:26.344586   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:26.344598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.112800   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.610507   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.327294   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.828519   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.100160   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.100351   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.929410   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:28.943941   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:28.944003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:28.986127   64758 cri.go:89] found id: ""
	I0804 00:17:28.986157   64758 logs.go:276] 0 containers: []
	W0804 00:17:28.986169   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:28.986176   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:28.986237   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:29.025528   64758 cri.go:89] found id: ""
	I0804 00:17:29.025556   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.025564   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:29.025570   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:29.025624   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:29.059525   64758 cri.go:89] found id: ""
	I0804 00:17:29.059553   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.059561   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:29.059566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:29.059614   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:29.097451   64758 cri.go:89] found id: ""
	I0804 00:17:29.097489   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.097499   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:29.097506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:29.097564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:29.135504   64758 cri.go:89] found id: ""
	I0804 00:17:29.135532   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.135540   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:29.135546   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:29.135601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:29.175277   64758 cri.go:89] found id: ""
	I0804 00:17:29.175314   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.175324   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:29.175332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:29.175391   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:29.210275   64758 cri.go:89] found id: ""
	I0804 00:17:29.210303   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.210314   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:29.210321   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:29.210382   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:29.246138   64758 cri.go:89] found id: ""
	I0804 00:17:29.246174   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.246186   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:29.246196   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:29.246213   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:29.298935   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:29.298971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:29.313342   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:29.313388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:29.384609   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:29.384635   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:29.384650   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:29.461759   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:29.461795   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:27.611021   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:29.612149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:27.831367   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.327878   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.328772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.101073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.600832   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.010152   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:32.023609   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:32.023677   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:32.062480   64758 cri.go:89] found id: ""
	I0804 00:17:32.062508   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.062517   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:32.062523   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:32.062590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:32.099601   64758 cri.go:89] found id: ""
	I0804 00:17:32.099627   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.099634   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:32.099640   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:32.099691   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:32.138651   64758 cri.go:89] found id: ""
	I0804 00:17:32.138680   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.138689   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:32.138694   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:32.138751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:32.182224   64758 cri.go:89] found id: ""
	I0804 00:17:32.182249   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.182257   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:32.182264   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:32.182318   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:32.224381   64758 cri.go:89] found id: ""
	I0804 00:17:32.224410   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.224421   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:32.224429   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:32.224486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:32.261569   64758 cri.go:89] found id: ""
	I0804 00:17:32.261600   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.261609   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:32.261615   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:32.261663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:32.304769   64758 cri.go:89] found id: ""
	I0804 00:17:32.304793   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.304807   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:32.304814   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:32.304867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:32.348695   64758 cri.go:89] found id: ""
	I0804 00:17:32.348727   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.348736   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:32.348745   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:32.348757   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.389444   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:32.389473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:32.442901   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:32.442938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:32.457562   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:32.457588   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:32.529121   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:32.529144   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:32.529160   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.114712   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:35.129725   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:35.129795   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:35.167226   64758 cri.go:89] found id: ""
	I0804 00:17:35.167248   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.167257   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:35.167262   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:35.167310   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:35.200889   64758 cri.go:89] found id: ""
	I0804 00:17:35.200914   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.200922   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:35.200927   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:35.201000   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:35.234899   64758 cri.go:89] found id: ""
	I0804 00:17:35.234927   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.234938   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:35.234945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:35.235003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:35.271355   64758 cri.go:89] found id: ""
	I0804 00:17:35.271393   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.271405   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:35.271412   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:35.271471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:35.313557   64758 cri.go:89] found id: ""
	I0804 00:17:35.313585   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.313595   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:35.313602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:35.313663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:35.352931   64758 cri.go:89] found id: ""
	I0804 00:17:35.352960   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.352971   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:35.352979   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:35.353046   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:35.391202   64758 cri.go:89] found id: ""
	I0804 00:17:35.391232   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.391256   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:35.391263   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:35.391337   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:35.427599   64758 cri.go:89] found id: ""
	I0804 00:17:35.427627   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.427638   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:35.427649   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:35.427666   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:35.482025   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:35.482061   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:35.498274   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:35.498303   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:35.572606   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:35.572631   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:35.572644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.655534   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:35.655566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.114835   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.610785   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.827077   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.827108   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.601588   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.602210   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:40.602295   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.205756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:38.218974   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:38.219044   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:38.253798   64758 cri.go:89] found id: ""
	I0804 00:17:38.253827   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.253839   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:38.253852   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:38.253911   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:38.291074   64758 cri.go:89] found id: ""
	I0804 00:17:38.291102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.291113   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:38.291120   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:38.291182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:38.332097   64758 cri.go:89] found id: ""
	I0804 00:17:38.332123   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.332133   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:38.332140   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:38.332198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:38.370074   64758 cri.go:89] found id: ""
	I0804 00:17:38.370102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.370110   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:38.370117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:38.370176   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:38.406962   64758 cri.go:89] found id: ""
	I0804 00:17:38.406984   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.406993   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:38.406998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:38.407051   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:38.447532   64758 cri.go:89] found id: ""
	I0804 00:17:38.447562   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.447572   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:38.447579   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:38.447653   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:38.484326   64758 cri.go:89] found id: ""
	I0804 00:17:38.484356   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.484368   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:38.484375   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:38.484444   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:38.521831   64758 cri.go:89] found id: ""
	I0804 00:17:38.521858   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.521869   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:38.521880   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:38.521893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.570540   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:38.570569   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:38.624921   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:38.624953   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:38.639451   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:38.639477   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:38.714435   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:38.714459   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:38.714475   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:41.295160   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:41.310032   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:41.310108   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:41.350363   64758 cri.go:89] found id: ""
	I0804 00:17:41.350393   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.350404   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:41.350412   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:41.350475   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:41.391662   64758 cri.go:89] found id: ""
	I0804 00:17:41.391691   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.391698   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:41.391703   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:41.391760   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:41.429653   64758 cri.go:89] found id: ""
	I0804 00:17:41.429678   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.429686   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:41.429692   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:41.429739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:41.469456   64758 cri.go:89] found id: ""
	I0804 00:17:41.469483   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.469494   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:41.469505   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:41.469566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:41.506124   64758 cri.go:89] found id: ""
	I0804 00:17:41.506154   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.506164   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:41.506171   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:41.506234   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:41.543139   64758 cri.go:89] found id: ""
	I0804 00:17:41.543171   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.543182   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:41.543190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:41.543252   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:41.580537   64758 cri.go:89] found id: ""
	I0804 00:17:41.580568   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.580578   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:41.580585   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:41.580652   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:41.619828   64758 cri.go:89] found id: ""
	I0804 00:17:41.619854   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.619862   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:41.619869   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:41.619882   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:41.660749   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:41.660780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:41.712889   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:41.712924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:41.726422   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:41.726447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:41.805673   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:41.805697   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:41.805712   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:37.110193   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.111203   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.327800   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.327910   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.099815   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.101262   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:44.386563   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:44.399891   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:44.399954   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:44.434270   64758 cri.go:89] found id: ""
	I0804 00:17:44.434297   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.434305   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:44.434311   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:44.434372   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:44.469423   64758 cri.go:89] found id: ""
	I0804 00:17:44.469454   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.469463   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:44.469468   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:44.469535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:44.505511   64758 cri.go:89] found id: ""
	I0804 00:17:44.505539   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.505547   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:44.505553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:44.505602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:44.540897   64758 cri.go:89] found id: ""
	I0804 00:17:44.540922   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.540932   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:44.540937   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:44.540996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:44.578722   64758 cri.go:89] found id: ""
	I0804 00:17:44.578747   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.578755   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:44.578760   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:44.578812   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:44.615838   64758 cri.go:89] found id: ""
	I0804 00:17:44.615863   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.615874   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:44.615881   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:44.615940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:44.657695   64758 cri.go:89] found id: ""
	I0804 00:17:44.657724   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.657734   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:44.657741   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:44.657916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:44.695852   64758 cri.go:89] found id: ""
	I0804 00:17:44.695882   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.695892   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:44.695901   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:44.695912   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:44.754643   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:44.754687   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:44.773964   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:44.773994   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:44.857544   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:44.857567   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:44.857583   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.952987   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:44.953027   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:43.610772   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.611480   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.827218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:46.327323   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.600755   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.099574   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.504957   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:47.520153   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:47.520232   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:47.557303   64758 cri.go:89] found id: ""
	I0804 00:17:47.557326   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.557334   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:47.557339   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:47.557410   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:47.595626   64758 cri.go:89] found id: ""
	I0804 00:17:47.595655   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.595665   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:47.595675   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:47.595733   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:47.633430   64758 cri.go:89] found id: ""
	I0804 00:17:47.633458   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.633466   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:47.633472   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:47.633525   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:47.670116   64758 cri.go:89] found id: ""
	I0804 00:17:47.670140   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.670149   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:47.670154   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:47.670200   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:47.709019   64758 cri.go:89] found id: ""
	I0804 00:17:47.709042   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.709050   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:47.709055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:47.709111   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:47.745230   64758 cri.go:89] found id: ""
	I0804 00:17:47.745251   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.745259   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:47.745265   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:47.745319   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:47.787957   64758 cri.go:89] found id: ""
	I0804 00:17:47.787985   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.787996   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:47.788004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:47.788063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:47.821451   64758 cri.go:89] found id: ""
	I0804 00:17:47.821477   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.821488   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:47.821498   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:47.821516   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:47.903035   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:47.903139   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:47.903162   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:47.986659   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:47.986702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:48.037921   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:48.037951   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:48.095354   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:48.095389   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:50.613264   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:50.627717   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:50.627792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:50.669311   64758 cri.go:89] found id: ""
	I0804 00:17:50.669338   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.669347   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:50.669370   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:50.669438   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:50.714674   64758 cri.go:89] found id: ""
	I0804 00:17:50.714704   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.714713   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:50.714718   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:50.714769   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:50.755291   64758 cri.go:89] found id: ""
	I0804 00:17:50.755318   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.755326   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:50.755332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:50.755394   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:50.801927   64758 cri.go:89] found id: ""
	I0804 00:17:50.801955   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.801964   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:50.801970   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:50.802020   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:50.845096   64758 cri.go:89] found id: ""
	I0804 00:17:50.845121   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.845130   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:50.845136   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:50.845193   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:50.882664   64758 cri.go:89] found id: ""
	I0804 00:17:50.882694   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.882705   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:50.882712   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:50.882771   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:50.921233   64758 cri.go:89] found id: ""
	I0804 00:17:50.921260   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.921268   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:50.921273   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:50.921326   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:50.955254   64758 cri.go:89] found id: ""
	I0804 00:17:50.955286   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.955298   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:50.955311   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:50.955329   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:51.010001   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:51.010037   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:51.024943   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:51.024966   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:51.096095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:51.096123   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:51.096139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:51.177829   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:51.177864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:47.611778   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.110408   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:48.328693   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.828022   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.609609   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.100616   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:53.720665   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:53.736318   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:53.736380   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:53.772887   64758 cri.go:89] found id: ""
	I0804 00:17:53.772916   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.772926   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:53.772934   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:53.772995   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:53.811771   64758 cri.go:89] found id: ""
	I0804 00:17:53.811797   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.811837   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:53.811845   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:53.811906   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:53.846684   64758 cri.go:89] found id: ""
	I0804 00:17:53.846716   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.846726   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:53.846736   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:53.846798   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:53.883550   64758 cri.go:89] found id: ""
	I0804 00:17:53.883581   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.883592   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:53.883600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:53.883662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:53.921031   64758 cri.go:89] found id: ""
	I0804 00:17:53.921061   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.921072   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:53.921080   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:53.921153   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:53.960338   64758 cri.go:89] found id: ""
	I0804 00:17:53.960364   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.960374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:53.960381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:53.960441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:53.998404   64758 cri.go:89] found id: ""
	I0804 00:17:53.998434   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.998450   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:53.998458   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:53.998520   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:54.033417   64758 cri.go:89] found id: ""
	I0804 00:17:54.033444   64758 logs.go:276] 0 containers: []
	W0804 00:17:54.033453   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:54.033461   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:54.033473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:54.071945   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:54.071971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:54.124614   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:54.124644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:54.140757   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:54.140783   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:54.241735   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:54.241754   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:54.241769   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:56.821591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:56.836569   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:56.836631   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:56.872013   64758 cri.go:89] found id: ""
	I0804 00:17:56.872039   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.872048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:56.872054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:56.872110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:52.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.111566   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.828335   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:54.830625   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.831382   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:57.101663   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.600253   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.908022   64758 cri.go:89] found id: ""
	I0804 00:17:56.908051   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.908061   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:56.908067   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:56.908114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:56.943309   64758 cri.go:89] found id: ""
	I0804 00:17:56.943336   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.943347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:56.943359   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:56.943415   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:56.977799   64758 cri.go:89] found id: ""
	I0804 00:17:56.977839   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.977847   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:56.977853   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:56.977916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:57.015185   64758 cri.go:89] found id: ""
	I0804 00:17:57.015213   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.015223   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:57.015237   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:57.015295   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:57.051856   64758 cri.go:89] found id: ""
	I0804 00:17:57.051879   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.051887   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:57.051893   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:57.051944   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:57.086349   64758 cri.go:89] found id: ""
	I0804 00:17:57.086376   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.086387   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:57.086393   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:57.086439   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:57.125005   64758 cri.go:89] found id: ""
	I0804 00:17:57.125048   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.125064   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:57.125076   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:57.125090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:57.200348   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:57.200382   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:57.240899   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:57.240924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.294331   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:57.294375   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:57.308388   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:57.308429   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:57.382602   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:59.883070   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:59.897055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:59.897116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:59.932983   64758 cri.go:89] found id: ""
	I0804 00:17:59.933012   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.933021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:59.933029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:59.933088   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:59.971781   64758 cri.go:89] found id: ""
	I0804 00:17:59.971807   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.971815   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:59.971820   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:59.971878   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:00.008381   64758 cri.go:89] found id: ""
	I0804 00:18:00.008406   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.008414   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:00.008419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:00.008483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:00.053257   64758 cri.go:89] found id: ""
	I0804 00:18:00.053281   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.053290   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:00.053295   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:00.053342   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:00.089891   64758 cri.go:89] found id: ""
	I0804 00:18:00.089925   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.089936   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:00.089943   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:00.090008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:00.129833   64758 cri.go:89] found id: ""
	I0804 00:18:00.129863   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.129875   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:00.129884   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:00.129942   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:00.181324   64758 cri.go:89] found id: ""
	I0804 00:18:00.181390   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.181403   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:00.181410   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:00.181471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:00.224426   64758 cri.go:89] found id: ""
	I0804 00:18:00.224451   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.224459   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:00.224467   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:00.224481   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:00.240122   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:00.240155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:00.317324   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:00.317346   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:00.317379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:00.398917   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:00.398952   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:00.440730   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:00.440758   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.111741   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.611509   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.327597   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:01.328678   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.099384   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.100512   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.992128   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:03.006787   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:03.006870   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:03.041291   64758 cri.go:89] found id: ""
	I0804 00:18:03.041321   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.041332   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:03.041341   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:03.041427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:03.077822   64758 cri.go:89] found id: ""
	I0804 00:18:03.077851   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.077863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:03.077871   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:03.077928   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:03.116579   64758 cri.go:89] found id: ""
	I0804 00:18:03.116603   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.116611   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:03.116616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:03.116662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:03.154904   64758 cri.go:89] found id: ""
	I0804 00:18:03.154931   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.154942   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:03.154950   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:03.155018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:03.190300   64758 cri.go:89] found id: ""
	I0804 00:18:03.190328   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.190341   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:03.190349   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:03.190413   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:03.225975   64758 cri.go:89] found id: ""
	I0804 00:18:03.226004   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.226016   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:03.226023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:03.226087   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:03.271499   64758 cri.go:89] found id: ""
	I0804 00:18:03.271525   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.271535   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:03.271543   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:03.271602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:03.308643   64758 cri.go:89] found id: ""
	I0804 00:18:03.308668   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.308675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:03.308684   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:03.308698   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:03.324528   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:03.324562   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:03.401102   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:03.401125   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:03.401139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:03.481817   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:03.481864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:03.522568   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:03.522601   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.074678   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:06.089765   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:06.089844   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:06.128372   64758 cri.go:89] found id: ""
	I0804 00:18:06.128400   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.128411   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:06.128419   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:06.128467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:06.169488   64758 cri.go:89] found id: ""
	I0804 00:18:06.169515   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.169525   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:06.169532   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:06.169590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:06.207969   64758 cri.go:89] found id: ""
	I0804 00:18:06.207998   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.208009   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:06.208015   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:06.208067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:06.244497   64758 cri.go:89] found id: ""
	I0804 00:18:06.244521   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.244529   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:06.244535   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:06.244592   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:06.282905   64758 cri.go:89] found id: ""
	I0804 00:18:06.282935   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.282945   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:06.282952   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:06.283013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:06.322498   64758 cri.go:89] found id: ""
	I0804 00:18:06.322523   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.322530   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:06.322537   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:06.322583   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:06.361377   64758 cri.go:89] found id: ""
	I0804 00:18:06.361402   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.361412   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:06.361420   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:06.361485   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:06.402082   64758 cri.go:89] found id: ""
	I0804 00:18:06.402112   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.402120   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:06.402128   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:06.402141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.452052   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:06.452089   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:06.466695   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:06.466734   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:06.546115   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:06.546140   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:06.546155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:06.639671   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:06.639708   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:02.111360   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.612557   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:03.330392   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:05.828925   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.603713   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.100060   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.193473   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:09.207696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:09.207755   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:09.247757   64758 cri.go:89] found id: ""
	I0804 00:18:09.247784   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.247795   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:09.247802   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:09.247867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:09.285516   64758 cri.go:89] found id: ""
	I0804 00:18:09.285549   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.285559   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:09.285567   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:09.285628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:09.321689   64758 cri.go:89] found id: ""
	I0804 00:18:09.321715   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.321725   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:09.321732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:09.321789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:09.358135   64758 cri.go:89] found id: ""
	I0804 00:18:09.358158   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.358166   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:09.358176   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:09.358223   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:09.393642   64758 cri.go:89] found id: ""
	I0804 00:18:09.393667   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.393675   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:09.393681   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:09.393730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:09.430651   64758 cri.go:89] found id: ""
	I0804 00:18:09.430674   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.430683   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:09.430689   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:09.430734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:09.472433   64758 cri.go:89] found id: ""
	I0804 00:18:09.472460   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.472469   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:09.472474   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:09.472533   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:09.511147   64758 cri.go:89] found id: ""
	I0804 00:18:09.511171   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.511179   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:09.511187   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:09.511207   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:09.560099   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:09.560142   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:09.574609   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:09.574641   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:09.646863   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:09.646891   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:09.646906   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:09.727309   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:09.727352   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:09.111726   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.611445   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:08.329278   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:10.827361   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.600326   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:14.099811   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:12.268925   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:12.284737   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:12.284813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:12.326015   64758 cri.go:89] found id: ""
	I0804 00:18:12.326036   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.326044   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:12.326049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:12.326095   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:12.374096   64758 cri.go:89] found id: ""
	I0804 00:18:12.374129   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.374138   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:12.374143   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:12.374235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:12.426467   64758 cri.go:89] found id: ""
	I0804 00:18:12.426493   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.426502   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:12.426509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:12.426570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:12.485034   64758 cri.go:89] found id: ""
	I0804 00:18:12.485060   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.485072   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:12.485079   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:12.485138   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:12.523490   64758 cri.go:89] found id: ""
	I0804 00:18:12.523517   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.523525   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:12.523530   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:12.523577   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:12.563318   64758 cri.go:89] found id: ""
	I0804 00:18:12.563347   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.563358   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:12.563365   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:12.563425   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:12.600455   64758 cri.go:89] found id: ""
	I0804 00:18:12.600482   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.600492   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:12.600499   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:12.600566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:12.641146   64758 cri.go:89] found id: ""
	I0804 00:18:12.641170   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.641178   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:12.641186   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:12.641197   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:12.697240   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:12.697274   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:12.711399   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:12.711432   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:12.794022   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:12.794050   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:12.794067   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:12.881327   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:12.881379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
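(The repeated "listing CRI containers" / "No container was found matching" pairs above come from probing each control-plane component with `sudo crictl ps -a --quiet --name=<component>`; an empty result means the container has not been created yet. A minimal, hypothetical Go sketch of that probe is below — the component names and the crictl invocation mirror the log, but the helper names are illustrative and this is not minikube's actual implementation.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the container IDs it prints, one per line. An empty slice corresponds to the
// "No container was found matching" lines in the log.
func probeContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Same component list the log cycles through.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := probeContainers(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}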
	I0804 00:18:15.425765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:15.439338   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:15.439420   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:15.477964   64758 cri.go:89] found id: ""
	I0804 00:18:15.477991   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.478002   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:15.478009   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:15.478069   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:15.514554   64758 cri.go:89] found id: ""
	I0804 00:18:15.514574   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.514583   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:15.514588   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:15.514636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:15.549702   64758 cri.go:89] found id: ""
	I0804 00:18:15.549732   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.549741   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:15.549747   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:15.549813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:15.584619   64758 cri.go:89] found id: ""
	I0804 00:18:15.584663   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.584675   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:15.584683   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:15.584746   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:15.625084   64758 cri.go:89] found id: ""
	I0804 00:18:15.625111   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.625121   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:15.625128   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:15.625192   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:15.666629   64758 cri.go:89] found id: ""
	I0804 00:18:15.666655   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.666664   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:15.666673   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:15.666726   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:15.704287   64758 cri.go:89] found id: ""
	I0804 00:18:15.704316   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.704324   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:15.704330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:15.704383   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:15.740629   64758 cri.go:89] found id: ""
	I0804 00:18:15.740659   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.740668   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:15.740678   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:15.740702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:15.794093   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:15.794124   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:15.807629   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:15.807659   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:15.887638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:15.887665   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:15.887680   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:15.972935   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:15.972978   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:13.611758   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.613472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:13.327640   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.827432   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:16.100599   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.603192   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.518022   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:18.532360   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:18.532433   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:18.565519   64758 cri.go:89] found id: ""
	I0804 00:18:18.565544   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.565552   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:18.565557   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:18.565612   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:18.599939   64758 cri.go:89] found id: ""
	I0804 00:18:18.599967   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.599978   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:18.599985   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:18.600055   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:18.639035   64758 cri.go:89] found id: ""
	I0804 00:18:18.639062   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.639070   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:18.639076   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:18.639124   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:18.677188   64758 cri.go:89] found id: ""
	I0804 00:18:18.677210   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.677218   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:18.677223   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:18.677268   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:18.715892   64758 cri.go:89] found id: ""
	I0804 00:18:18.715921   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.715932   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:18.715940   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:18.716005   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:18.752274   64758 cri.go:89] found id: ""
	I0804 00:18:18.752298   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.752307   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:18.752313   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:18.752368   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:18.795251   64758 cri.go:89] found id: ""
	I0804 00:18:18.795279   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.795288   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:18.795293   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:18.795353   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.830842   64758 cri.go:89] found id: ""
	I0804 00:18:18.830866   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.830874   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:18.830882   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:18.830893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:18.883687   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:18.883719   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:18.898406   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:18.898433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:18.973191   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:18.973215   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:18.973231   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:19.054094   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:19.054141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:21.597245   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:21.612534   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:21.612605   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:21.649391   64758 cri.go:89] found id: ""
	I0804 00:18:21.649415   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.649426   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:21.649434   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:21.649492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:21.683202   64758 cri.go:89] found id: ""
	I0804 00:18:21.683226   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.683233   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:21.683244   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:21.683300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:21.717450   64758 cri.go:89] found id: ""
	I0804 00:18:21.717475   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.717484   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:21.717489   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:21.717547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:21.752559   64758 cri.go:89] found id: ""
	I0804 00:18:21.752588   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.752596   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:21.752602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:21.752650   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:21.788336   64758 cri.go:89] found id: ""
	I0804 00:18:21.788364   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.788375   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:21.788381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:21.788428   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:21.829404   64758 cri.go:89] found id: ""
	I0804 00:18:21.829428   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.829436   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:21.829443   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:21.829502   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:21.869473   64758 cri.go:89] found id: ""
	I0804 00:18:21.869504   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.869515   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:21.869521   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:21.869587   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.111377   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.610253   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:17.827889   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.327830   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.100486   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:23.599788   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.601620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.909883   64758 cri.go:89] found id: ""
	I0804 00:18:21.909907   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.909915   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:21.909923   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:21.909940   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:21.925038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:21.925071   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:22.000261   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.000281   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:22.000294   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:22.082813   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:22.082846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:22.126741   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:22.126774   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:24.677246   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:24.692404   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:24.692467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:24.739001   64758 cri.go:89] found id: ""
	I0804 00:18:24.739039   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.739049   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:24.739054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:24.739119   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:24.779558   64758 cri.go:89] found id: ""
	I0804 00:18:24.779586   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.779597   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:24.779605   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:24.779664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:24.819257   64758 cri.go:89] found id: ""
	I0804 00:18:24.819284   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.819295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:24.819301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:24.819363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:24.862504   64758 cri.go:89] found id: ""
	I0804 00:18:24.862531   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.862539   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:24.862544   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:24.862599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:24.899605   64758 cri.go:89] found id: ""
	I0804 00:18:24.899637   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.899649   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:24.899656   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:24.899716   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:24.936575   64758 cri.go:89] found id: ""
	I0804 00:18:24.936604   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.936612   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:24.936618   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:24.936667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:24.971736   64758 cri.go:89] found id: ""
	I0804 00:18:24.971764   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.971775   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:24.971782   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:24.971851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:25.010214   64758 cri.go:89] found id: ""
	I0804 00:18:25.010244   64758 logs.go:276] 0 containers: []
	W0804 00:18:25.010253   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:25.010265   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:25.010279   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:25.091145   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:25.091186   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:25.137574   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:25.137603   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:25.189559   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:25.189593   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:25.204725   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:25.204763   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:25.278903   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.111666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:22.827542   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:24.829587   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.326922   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:28.100576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:30.603955   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.779500   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:27.793548   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:27.793628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:27.830811   64758 cri.go:89] found id: ""
	I0804 00:18:27.830844   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.830854   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:27.830862   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:27.830919   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:27.869966   64758 cri.go:89] found id: ""
	I0804 00:18:27.869991   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.869998   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:27.870004   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:27.870062   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:27.909474   64758 cri.go:89] found id: ""
	I0804 00:18:27.909496   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.909504   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:27.909509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:27.909567   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:27.948588   64758 cri.go:89] found id: ""
	I0804 00:18:27.948613   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.948625   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:27.948632   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:27.948704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:27.991957   64758 cri.go:89] found id: ""
	I0804 00:18:27.991979   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.991987   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:27.991993   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:27.992052   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:28.029516   64758 cri.go:89] found id: ""
	I0804 00:18:28.029544   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.029555   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:28.029562   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:28.029627   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:28.067851   64758 cri.go:89] found id: ""
	I0804 00:18:28.067879   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.067891   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:28.067898   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:28.067957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:28.107488   64758 cri.go:89] found id: ""
	I0804 00:18:28.107514   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.107524   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:28.107534   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:28.107548   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:28.158490   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:28.158523   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:28.172000   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:28.172030   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:28.247803   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:28.247823   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:28.247839   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:28.326695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:28.326727   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:30.867241   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:30.881074   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:30.881146   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:30.919078   64758 cri.go:89] found id: ""
	I0804 00:18:30.919105   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.919115   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:30.919122   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:30.919184   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:30.954436   64758 cri.go:89] found id: ""
	I0804 00:18:30.954463   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.954474   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:30.954481   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:30.954546   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:30.993080   64758 cri.go:89] found id: ""
	I0804 00:18:30.993110   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.993121   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:30.993129   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:30.993188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:31.031465   64758 cri.go:89] found id: ""
	I0804 00:18:31.031493   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.031504   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:31.031512   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:31.031570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:31.067374   64758 cri.go:89] found id: ""
	I0804 00:18:31.067405   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.067416   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:31.067423   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:31.067493   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:31.104021   64758 cri.go:89] found id: ""
	I0804 00:18:31.104048   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.104059   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:31.104066   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:31.104128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:31.146995   64758 cri.go:89] found id: ""
	I0804 00:18:31.147023   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.147033   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:31.147040   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:31.147106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:31.184708   64758 cri.go:89] found id: ""
	I0804 00:18:31.184739   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.184749   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:31.184760   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:31.184776   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:31.237743   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:31.237781   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:31.252038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:31.252070   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:31.326357   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:31.326380   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:31.326401   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:31.408212   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:31.408256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
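(The interleaved pod_ready.go:102 lines from the other PIDs are separate profiles polling their metrics-server pod until its Ready condition turns True. A minimal sketch of such a readiness poll with client-go is below, assuming a standard kubeconfig at the default location; the pod name is copied from the log, everything else is illustrative rather than minikube's actual code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, which is the
// state the log is waiting for when it prints has status "Ready":"False".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-5xfgz", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet; retrying")
		time.Sleep(2 * time.Second)
	}
}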
	I0804 00:18:27.610666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.610899   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:31.611472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.827980   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:32.326666   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.099814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:35.100740   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.954396   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:33.968311   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:33.968384   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:34.006574   64758 cri.go:89] found id: ""
	I0804 00:18:34.006605   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.006625   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:34.006635   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:34.006698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:34.042400   64758 cri.go:89] found id: ""
	I0804 00:18:34.042427   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.042435   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:34.042441   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:34.042492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:34.080769   64758 cri.go:89] found id: ""
	I0804 00:18:34.080793   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.080804   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:34.080810   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:34.080877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:34.118283   64758 cri.go:89] found id: ""
	I0804 00:18:34.118311   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.118320   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:34.118326   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:34.118377   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:34.153679   64758 cri.go:89] found id: ""
	I0804 00:18:34.153708   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.153719   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:34.153727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:34.153780   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:34.189618   64758 cri.go:89] found id: ""
	I0804 00:18:34.189674   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.189686   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:34.189696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:34.189770   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:34.224628   64758 cri.go:89] found id: ""
	I0804 00:18:34.224666   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.224677   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:34.224684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:34.224744   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:34.265343   64758 cri.go:89] found id: ""
	I0804 00:18:34.265389   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.265399   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:34.265409   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:34.265428   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:34.337992   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:34.338014   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:34.338025   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:34.420224   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:34.420263   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:34.462009   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:34.462042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:34.520087   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:34.520120   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:34.111351   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.112271   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:34.328807   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.827190   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.599447   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.099291   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.035398   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:37.048955   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:37.049024   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:37.087433   64758 cri.go:89] found id: ""
	I0804 00:18:37.087460   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.087470   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:37.087478   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:37.087542   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:37.128227   64758 cri.go:89] found id: ""
	I0804 00:18:37.128255   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.128267   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:37.128275   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:37.128328   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:37.165371   64758 cri.go:89] found id: ""
	I0804 00:18:37.165405   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.165415   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:37.165424   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:37.165486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:37.201168   64758 cri.go:89] found id: ""
	I0804 00:18:37.201198   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.201209   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:37.201217   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:37.201278   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:37.237378   64758 cri.go:89] found id: ""
	I0804 00:18:37.237406   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.237414   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:37.237419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:37.237465   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:37.273425   64758 cri.go:89] found id: ""
	I0804 00:18:37.273456   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.273467   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:37.273475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:37.273547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:37.313019   64758 cri.go:89] found id: ""
	I0804 00:18:37.313048   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.313056   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:37.313061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:37.313116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:37.354741   64758 cri.go:89] found id: ""
	I0804 00:18:37.354771   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.354779   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:37.354788   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:37.354800   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:37.408703   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:37.408740   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.423393   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:37.423419   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:37.497460   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:37.497487   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:37.497501   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:37.579811   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:37.579856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:40.122872   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:40.139106   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:40.139177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:40.178571   64758 cri.go:89] found id: ""
	I0804 00:18:40.178599   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.178610   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:40.178617   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:40.178679   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:40.215680   64758 cri.go:89] found id: ""
	I0804 00:18:40.215714   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.215722   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:40.215728   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:40.215776   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:40.250618   64758 cri.go:89] found id: ""
	I0804 00:18:40.250647   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.250658   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:40.250666   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:40.250729   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:40.289195   64758 cri.go:89] found id: ""
	I0804 00:18:40.289223   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.289233   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:40.289240   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:40.289296   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:40.330961   64758 cri.go:89] found id: ""
	I0804 00:18:40.330988   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.330998   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:40.331006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:40.331056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:40.376435   64758 cri.go:89] found id: ""
	I0804 00:18:40.376465   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.376478   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:40.376487   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:40.376558   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:40.416415   64758 cri.go:89] found id: ""
	I0804 00:18:40.416447   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.416459   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:40.416465   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:40.416535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:40.452958   64758 cri.go:89] found id: ""
	I0804 00:18:40.452996   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.453007   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:40.453018   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:40.453036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:40.503775   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:40.503808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:40.517825   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:40.517855   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:40.587818   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:40.587847   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:40.587861   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:40.674139   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:40.674183   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:38.611068   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.611923   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:39.326489   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:41.327327   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:42.100795   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:44.602441   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.217266   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:43.232190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:43.232262   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:43.270127   64758 cri.go:89] found id: ""
	I0804 00:18:43.270156   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.270163   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:43.270169   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:43.270219   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:43.309401   64758 cri.go:89] found id: ""
	I0804 00:18:43.309429   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.309439   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:43.309446   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:43.309503   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:43.347210   64758 cri.go:89] found id: ""
	I0804 00:18:43.347235   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.347242   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:43.347247   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:43.347300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:43.382548   64758 cri.go:89] found id: ""
	I0804 00:18:43.382578   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.382588   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:43.382595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:43.382658   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:43.422076   64758 cri.go:89] found id: ""
	I0804 00:18:43.422102   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.422113   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:43.422121   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:43.422168   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:43.458525   64758 cri.go:89] found id: ""
	I0804 00:18:43.458552   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.458560   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:43.458566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:43.458623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:43.498134   64758 cri.go:89] found id: ""
	I0804 00:18:43.498157   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.498165   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:43.498170   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:43.498217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:43.543289   64758 cri.go:89] found id: ""
	I0804 00:18:43.543312   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.543320   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:43.543328   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:43.543338   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:43.593489   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:43.593521   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:43.607838   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:43.607869   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:43.682791   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:43.682813   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:43.682826   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:43.761695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:43.761737   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.305385   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:46.320003   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:46.320063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:46.367941   64758 cri.go:89] found id: ""
	I0804 00:18:46.367969   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.367980   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:46.367986   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:46.368058   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:46.422540   64758 cri.go:89] found id: ""
	I0804 00:18:46.422563   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.422572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:46.422578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:46.422637   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:46.470192   64758 cri.go:89] found id: ""
	I0804 00:18:46.470238   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.470248   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:46.470257   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:46.470316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:46.512375   64758 cri.go:89] found id: ""
	I0804 00:18:46.512399   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.512408   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:46.512413   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:46.512471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:46.546547   64758 cri.go:89] found id: ""
	I0804 00:18:46.546580   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.546592   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:46.546600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:46.546665   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:46.583598   64758 cri.go:89] found id: ""
	I0804 00:18:46.583621   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.583630   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:46.583636   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:46.583692   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:46.621066   64758 cri.go:89] found id: ""
	I0804 00:18:46.621101   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.621116   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:46.621122   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:46.621177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:46.654115   64758 cri.go:89] found id: ""
	I0804 00:18:46.654149   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.654162   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:46.654174   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:46.654191   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:46.738542   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:46.738582   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.778894   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:46.778923   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:46.833225   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:46.833257   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:46.847222   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:46.847247   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:18:42.612522   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.327420   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.327936   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:47.328380   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:46.604576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.100232   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:18:46.922590   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.423639   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:49.437417   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:49.437490   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:49.474889   64758 cri.go:89] found id: ""
	I0804 00:18:49.474914   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.474923   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:49.474929   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:49.474986   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:49.512860   64758 cri.go:89] found id: ""
	I0804 00:18:49.512889   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.512900   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:49.512908   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:49.512965   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:49.550558   64758 cri.go:89] found id: ""
	I0804 00:18:49.550594   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.550603   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:49.550611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:49.550671   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:49.587779   64758 cri.go:89] found id: ""
	I0804 00:18:49.587810   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.587823   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:49.587831   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:49.587890   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:49.630307   64758 cri.go:89] found id: ""
	I0804 00:18:49.630333   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.630344   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:49.630352   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:49.630411   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:49.665013   64758 cri.go:89] found id: ""
	I0804 00:18:49.665046   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.665057   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:49.665064   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:49.665127   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:49.701375   64758 cri.go:89] found id: ""
	I0804 00:18:49.701401   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.701410   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:49.701415   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:49.701472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:49.737237   64758 cri.go:89] found id: ""
	I0804 00:18:49.737261   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.737269   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:49.737278   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:49.737291   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:49.790998   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:49.791033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:49.804933   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:49.804965   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:49.877997   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.878019   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:49.878035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:49.963836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:49.963872   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:47.611774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.612581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.616560   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.827900   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.829950   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.599613   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:53.600496   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:52.506621   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:52.521482   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:52.521553   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:52.555980   64758 cri.go:89] found id: ""
	I0804 00:18:52.556010   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.556021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:52.556029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:52.556094   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:52.593088   64758 cri.go:89] found id: ""
	I0804 00:18:52.593119   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.593130   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:52.593138   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:52.593197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:52.632058   64758 cri.go:89] found id: ""
	I0804 00:18:52.632088   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.632107   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:52.632115   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:52.632177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:52.668701   64758 cri.go:89] found id: ""
	I0804 00:18:52.668730   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.668739   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:52.668745   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:52.668814   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:52.705041   64758 cri.go:89] found id: ""
	I0804 00:18:52.705068   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.705075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:52.705089   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:52.705149   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:52.743304   64758 cri.go:89] found id: ""
	I0804 00:18:52.743327   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.743335   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:52.743340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:52.743397   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:52.781020   64758 cri.go:89] found id: ""
	I0804 00:18:52.781050   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.781060   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:52.781073   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:52.781134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:52.820979   64758 cri.go:89] found id: ""
	I0804 00:18:52.821004   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.821014   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:52.821024   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:52.821042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:52.876450   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:52.876488   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:52.890529   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:52.890566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:52.960682   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:52.960710   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:52.960725   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:53.044000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:53.044040   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:55.601594   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:55.615574   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:55.615645   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:55.655116   64758 cri.go:89] found id: ""
	I0804 00:18:55.655146   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.655157   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:55.655164   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:55.655217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:55.695809   64758 cri.go:89] found id: ""
	I0804 00:18:55.695837   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.695846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:55.695851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:55.695909   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:55.732784   64758 cri.go:89] found id: ""
	I0804 00:18:55.732811   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.732822   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:55.732828   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:55.732920   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:55.773316   64758 cri.go:89] found id: ""
	I0804 00:18:55.773338   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.773347   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:55.773368   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:55.773416   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:55.808886   64758 cri.go:89] found id: ""
	I0804 00:18:55.808913   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.808924   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:55.808931   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:55.808990   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:55.848471   64758 cri.go:89] found id: ""
	I0804 00:18:55.848499   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.848507   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:55.848513   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:55.848568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:55.884088   64758 cri.go:89] found id: ""
	I0804 00:18:55.884117   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.884128   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:55.884134   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:55.884194   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:55.918194   64758 cri.go:89] found id: ""
	I0804 00:18:55.918222   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.918233   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:55.918243   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:55.918264   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:55.932685   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:55.932717   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:56.003817   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:56.003840   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:56.003856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:56.087804   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:56.087846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:56.129959   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:56.129993   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:54.111584   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.610664   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:54.327283   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.328332   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.100620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.601669   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.604763   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.685077   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:58.698624   64758 kubeadm.go:597] duration metric: took 4m4.179874556s to restartPrimaryControlPlane
	W0804 00:18:58.698704   64758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:18:58.698731   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:18:58.611004   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.611252   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.828188   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:01.329218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.100214   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.101275   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.967117   64758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.268366381s)
	I0804 00:19:03.967202   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:19:03.982098   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:19:03.991994   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:19:04.002780   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:19:04.002802   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:19:04.002845   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:19:04.012216   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:19:04.012279   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:19:04.021463   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:19:04.030689   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:19:04.030743   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:19:04.040801   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.050496   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:19:04.050558   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.060782   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:19:04.071595   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:19:04.071673   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:19:04.082123   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:19:04.313172   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:19:02.611712   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.111575   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.827427   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:06.327317   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.599775   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:09.599814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.611608   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.110194   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:08.333681   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.829626   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:11.601081   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.099098   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:12.110388   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.111401   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:13.327035   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:15.327695   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:17.327749   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.100543   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.602723   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:20.603470   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.611336   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.111798   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:19.329120   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.826869   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:22.605600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.101500   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:23.610581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.610814   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:24.326982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:26.827772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:27.599557   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.600283   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:28.110748   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:30.111027   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.327031   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:31.328581   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.101571   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.601251   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.610784   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.612611   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:33.828237   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:35.828319   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.099717   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.100492   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.111009   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.610805   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:38.326730   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:40.327548   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.330066   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:41.600239   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:43.600686   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.601464   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.110900   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:44.610221   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.605124   65087 pod_ready.go:81] duration metric: took 4m0.000843677s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:45.605152   65087 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0804 00:19:45.605175   65087 pod_ready.go:38] duration metric: took 4m13.615224515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:45.605208   65087 kubeadm.go:597] duration metric: took 4m21.736484018s to restartPrimaryControlPlane
	W0804 00:19:45.605273   65087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:19:45.605304   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:19:44.827547   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:47.329541   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:48.101237   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:50.603754   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:49.826561   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:51.828643   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.100714   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:55.102037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.832996   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:54.830906   65441 pod_ready.go:81] duration metric: took 4m0.010324747s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:54.830936   65441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:19:54.830947   65441 pod_ready.go:38] duration metric: took 4m4.842701336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:54.830968   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:19:54.831003   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:54.831070   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:54.887772   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:54.887804   65441 cri.go:89] found id: ""
	I0804 00:19:54.887815   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:54.887877   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.892740   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:54.892801   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:54.943044   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:54.943082   65441 cri.go:89] found id: ""
	I0804 00:19:54.943092   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:54.943164   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.947699   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:54.947765   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:54.997280   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:54.997302   65441 cri.go:89] found id: ""
	I0804 00:19:54.997311   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:54.997380   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.005574   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:55.005642   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:55.066824   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:55.066845   65441 cri.go:89] found id: ""
	I0804 00:19:55.066852   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:55.066906   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.071713   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:55.071779   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:55.116381   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.116406   65441 cri.go:89] found id: ""
	I0804 00:19:55.116414   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:55.116468   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.121174   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:55.121237   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:55.168300   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:55.168323   65441 cri.go:89] found id: ""
	I0804 00:19:55.168331   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:55.168381   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.173450   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:55.173509   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:55.218999   65441 cri.go:89] found id: ""
	I0804 00:19:55.219030   65441 logs.go:276] 0 containers: []
	W0804 00:19:55.219041   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:55.219048   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:55.219115   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:55.263696   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:55.263723   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.263727   65441 cri.go:89] found id: ""
	I0804 00:19:55.263734   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:55.263789   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.269001   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.277864   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:19:55.277899   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.323692   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:55.323729   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.364971   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:55.365005   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:55.871942   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:19:55.871983   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:19:55.929828   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:55.929869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:55.987389   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:55.987425   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:56.041330   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:56.041381   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:56.082524   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:56.082556   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:56.122545   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:19:56.122572   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:56.178249   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:19:56.178288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:56.219273   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:19:56.219300   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:19:56.235345   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:19:56.235389   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:19:56.370660   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:56.370692   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:57.600248   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:00.100920   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:58.936934   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:19:58.953624   65441 api_server.go:72] duration metric: took 4m14.22488371s to wait for apiserver process to appear ...
	I0804 00:19:58.953655   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:19:58.953700   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:58.953764   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:58.997408   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:58.997434   65441 cri.go:89] found id: ""
	I0804 00:19:58.997443   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:58.997492   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.004398   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:59.004466   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:59.041483   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.041510   65441 cri.go:89] found id: ""
	I0804 00:19:59.041518   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:59.041568   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.045754   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:59.045825   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:59.081738   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.081756   65441 cri.go:89] found id: ""
	I0804 00:19:59.081764   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:59.081809   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.086297   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:59.086348   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:59.124421   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:59.124440   65441 cri.go:89] found id: ""
	I0804 00:19:59.124447   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:59.124494   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.128612   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:59.128677   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:59.165702   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:59.165728   65441 cri.go:89] found id: ""
	I0804 00:19:59.165737   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:59.165791   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.170016   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:59.170103   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:59.205275   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:59.205299   65441 cri.go:89] found id: ""
	I0804 00:19:59.205307   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:59.205377   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.209637   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:59.209699   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:59.244254   65441 cri.go:89] found id: ""
	I0804 00:19:59.244281   65441 logs.go:276] 0 containers: []
	W0804 00:19:59.244290   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:59.244295   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:59.244343   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:59.281850   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:59.281876   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.281880   65441 cri.go:89] found id: ""
	I0804 00:19:59.281887   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:59.281935   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.286423   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.291108   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:59.291134   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.340778   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:59.340808   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.379258   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:59.379288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.418902   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:59.418932   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:59.875668   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:59.875708   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:59.932947   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:59.932980   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:59.980190   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:59.980224   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:00.024331   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:00.024359   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:00.064676   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:00.064701   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:00.117684   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:00.117717   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:00.153654   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:00.153683   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:00.200840   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:00.200869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:00.214380   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:00.214410   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:02.101240   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:04.600064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:02.832546   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:20:02.837684   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:20:02.838736   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:02.838763   65441 api_server.go:131] duration metric: took 3.885096913s to wait for apiserver health ...
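The healthz wait above repeatedly probes https://192.168.39.132:8444/healthz until it answers 200/"ok". A simplified Go sketch of such a probe, assuming TLS verification is skipped for brevity (minikube's real client trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns "ok"
// or the deadline passes. InsecureSkipVerify is an illustration-only shortcut.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.132:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}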
	I0804 00:20:02.838773   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:02.838798   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:02.838856   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:02.878530   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:02.878556   65441 cri.go:89] found id: ""
	I0804 00:20:02.878565   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:20:02.878628   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.883263   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:02.883338   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:02.921989   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:02.922009   65441 cri.go:89] found id: ""
	I0804 00:20:02.922017   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:20:02.922062   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.928690   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:02.928767   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:02.967469   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:02.967490   65441 cri.go:89] found id: ""
	I0804 00:20:02.967498   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:20:02.967544   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.972155   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:02.972217   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:03.011875   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:03.011900   65441 cri.go:89] found id: ""
	I0804 00:20:03.011910   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:20:03.011966   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.016326   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:03.016395   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:03.057114   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:03.057137   65441 cri.go:89] found id: ""
	I0804 00:20:03.057145   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:20:03.057206   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.061528   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:03.061592   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:03.101778   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:03.101832   65441 cri.go:89] found id: ""
	I0804 00:20:03.101842   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:20:03.101902   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.106292   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:03.106368   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:03.146453   65441 cri.go:89] found id: ""
	I0804 00:20:03.146484   65441 logs.go:276] 0 containers: []
	W0804 00:20:03.146496   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:03.146504   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:03.146569   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:03.185861   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.185884   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.185887   65441 cri.go:89] found id: ""
	I0804 00:20:03.185894   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:20:03.185941   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.190490   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.194727   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:03.194750   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:03.308015   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:20:03.308052   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:03.358699   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:20:03.358732   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:03.410398   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:20:03.410430   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.450651   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:03.450685   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:03.859092   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:03.859145   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.905500   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:03.905529   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:03.951014   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:03.951047   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:04.003275   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:04.003311   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:04.017574   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:20:04.017608   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:04.054252   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:20:04.054283   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:04.094524   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:04.094558   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:04.131163   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:04.131192   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:06.691154   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:06.691193   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.691199   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.691203   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.691209   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.691213   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.691218   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.691226   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.691232   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.691244   65441 system_pods.go:74] duration metric: took 3.852463199s to wait for pod list to return data ...
	I0804 00:20:06.691257   65441 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:06.693724   65441 default_sa.go:45] found service account: "default"
	I0804 00:20:06.693755   65441 default_sa.go:55] duration metric: took 2.486182ms for default service account to be created ...
	I0804 00:20:06.693767   65441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:06.698925   65441 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:06.698950   65441 system_pods.go:89] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.698956   65441 system_pods.go:89] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.698962   65441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.698968   65441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.698972   65441 system_pods.go:89] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.698976   65441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.698983   65441 system_pods.go:89] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.698990   65441 system_pods.go:89] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.698997   65441 system_pods.go:126] duration metric: took 5.224971ms to wait for k8s-apps to be running ...
	I0804 00:20:06.699003   65441 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:06.699047   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:06.714188   65441 system_svc.go:56] duration metric: took 15.17801ms WaitForService to wait for kubelet
	I0804 00:20:06.714213   65441 kubeadm.go:582] duration metric: took 4m21.985480612s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:06.714232   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:06.716717   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:06.716743   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:06.716757   65441 node_conditions.go:105] duration metric: took 2.521245ms to run NodePressure ...
	I0804 00:20:06.716771   65441 start.go:241] waiting for startup goroutines ...
	I0804 00:20:06.716780   65441 start.go:246] waiting for cluster config update ...
	I0804 00:20:06.716796   65441 start.go:255] writing updated cluster config ...
	I0804 00:20:06.717156   65441 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:06.765983   65441 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:06.768482   65441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-969068" cluster and "default" namespace by default
	I0804 00:20:06.600233   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:08.603829   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:11.852948   65087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.247618249s)
	I0804 00:20:11.853025   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:11.870882   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:20:11.882005   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:20:11.892505   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:20:11.892527   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:20:11.892570   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:20:11.902005   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:20:11.902061   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:20:11.911585   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:20:11.921837   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:20:11.921911   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:20:11.101091   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:13.607073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:14.600605   64502 pod_ready.go:81] duration metric: took 4m0.007136508s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:20:14.600629   64502 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:20:14.600637   64502 pod_ready.go:38] duration metric: took 4m5.120472791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
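The pod_ready loop above polls the metrics-server pod, never sees its Ready condition become True, and gives up after 4m with "context deadline exceeded". A hedged client-go sketch of that kind of wait (illustrative only, not minikube's pod_ready.go; the kubeconfig path and pod name are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or ctx expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, cs, "kube-system", "metrics-server-569cc877fc-hbcm9"))
}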
	I0804 00:20:14.600651   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:14.600675   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:14.600717   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:14.669699   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:14.669724   64502 cri.go:89] found id: ""
	I0804 00:20:14.669733   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:14.669789   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.674907   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:14.674978   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:14.720830   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:14.720867   64502 cri.go:89] found id: ""
	I0804 00:20:14.720877   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:14.720937   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.726667   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:14.726729   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:14.778216   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:14.778247   64502 cri.go:89] found id: ""
	I0804 00:20:14.778256   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:14.778321   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.785349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:14.785433   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:14.836381   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:14.836408   64502 cri.go:89] found id: ""
	I0804 00:20:14.836416   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:14.836475   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.841662   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:14.841752   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:14.884803   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:14.884827   64502 cri.go:89] found id: ""
	I0804 00:20:14.884836   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:14.884904   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.890625   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:14.890696   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:14.942713   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:14.942732   64502 cri.go:89] found id: ""
	I0804 00:20:14.942739   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:14.942800   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.948335   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:14.948391   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:14.994869   64502 cri.go:89] found id: ""
	I0804 00:20:14.994900   64502 logs.go:276] 0 containers: []
	W0804 00:20:14.994910   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:14.994917   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:14.994985   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:15.034528   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.034557   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.034564   64502 cri.go:89] found id: ""
	I0804 00:20:15.034574   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:15.034633   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.039335   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.044600   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:15.044625   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.091365   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:15.091398   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.144896   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:15.144924   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:15.675849   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:15.675901   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:15.691640   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:15.691699   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:11.931844   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.941369   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:20:11.941430   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.951279   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:20:11.961201   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:20:11.961275   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
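The stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it (here the files simply do not exist yet, so every grep exits with status 2 and the rm is a no-op). A rough Go equivalent of that check (hypothetical helper, not minikube's kubeadm.go; endpoint and paths mirror the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig-style file that does not
// reference the expected control-plane endpoint, mimicking the grep/rm
// sequence in the log.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: remove it so kubeadm can regenerate it.
			os.Remove(p)
			fmt.Printf("removed (or absent): %s\n", p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}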
	I0804 00:20:11.972150   65087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:20:12.024567   65087 kubeadm.go:310] W0804 00:20:12.001791    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.025287   65087 kubeadm.go:310] W0804 00:20:12.002530    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.154034   65087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:20:20.358593   65087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0804 00:20:20.358649   65087 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:20:20.358721   65087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:20:20.358834   65087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:20:20.358953   65087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 00:20:20.359013   65087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:20:20.360498   65087 out.go:204]   - Generating certificates and keys ...
	I0804 00:20:20.360590   65087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:20:20.360692   65087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:20:20.360767   65087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:20:20.360821   65087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:20:20.360915   65087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:20:20.360971   65087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:20:20.361042   65087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:20:20.361124   65087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:20:20.361228   65087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:20:20.361307   65087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:20:20.361342   65087 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:20:20.361436   65087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:20:20.361523   65087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:20:20.361592   65087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:20:20.361642   65087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:20:20.361698   65087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:20:20.361746   65087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:20:20.361815   65087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:20:20.361881   65087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:20:20.363214   65087 out.go:204]   - Booting up control plane ...
	I0804 00:20:20.363312   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:20:20.363381   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:20:20.363450   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:20:20.363541   65087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:20:20.363628   65087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:20:20.363678   65087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:20:20.363790   65087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:20:20.363889   65087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 00:20:20.363960   65087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.009132208s
	I0804 00:20:20.364044   65087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:20:20.364094   65087 kubeadm.go:310] [api-check] The API server is healthy after 4.501833932s
	I0804 00:20:20.364201   65087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:20:20.364321   65087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:20:20.364397   65087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:20:20.364585   65087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-118016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:20:20.364634   65087 kubeadm.go:310] [bootstrap-token] Using token: bbnfwa.jorg7huedw5cbtk2
	I0804 00:20:20.366569   65087 out.go:204]   - Configuring RBAC rules ...
	I0804 00:20:20.366705   65087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:20:20.366823   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:20:20.366979   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:20:20.367160   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:20:20.367275   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:20:20.367352   65087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:20:20.367447   65087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:20:20.367510   65087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:20:20.367574   65087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:20:20.367580   65087 kubeadm.go:310] 
	I0804 00:20:20.367629   65087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:20:20.367635   65087 kubeadm.go:310] 
	I0804 00:20:20.367697   65087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:20:20.367703   65087 kubeadm.go:310] 
	I0804 00:20:20.367724   65087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:20:20.367784   65087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:20:20.367828   65087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:20:20.367834   65087 kubeadm.go:310] 
	I0804 00:20:20.367886   65087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:20:20.367903   65087 kubeadm.go:310] 
	I0804 00:20:20.367971   65087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:20:20.367981   65087 kubeadm.go:310] 
	I0804 00:20:20.368050   65087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:20:20.368125   65087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:20:20.368187   65087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:20:20.368193   65087 kubeadm.go:310] 
	I0804 00:20:20.368262   65087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:20:20.368353   65087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:20:20.368367   65087 kubeadm.go:310] 
	I0804 00:20:20.368480   65087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368588   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:20:20.368614   65087 kubeadm.go:310] 	--control-plane 
	I0804 00:20:20.368621   65087 kubeadm.go:310] 
	I0804 00:20:20.368705   65087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:20:20.368712   65087 kubeadm.go:310] 
	I0804 00:20:20.368810   65087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368933   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:20:20.368947   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:20:20.368957   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:20:20.370303   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
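With the kvm2 driver and crio runtime, minikube falls back to the bridge CNI and later writes /etc/cni/net.d/1-k8s.conflist (the 496-byte payload itself is not shown in this log). A sketch of writing such a conflist; the JSON body is a generic bridge + host-local example with an assumed pod subnet, not minikube's actual file:

package main

import "os"

// writeBridgeConflist drops a minimal bridge CNI config list. The JSON is an
// assumption for illustration (generic bridge/host-local/portmap chain), NOT
// the exact 1-k8s.conflist that minikube generates.
func writeBridgeConflist(path string) error {
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	return os.WriteFile(path, []byte(conflist), 0o644)
}

func main() {
	if err := writeBridgeConflist("/etc/cni/net.d/1-k8s.conflist"); err != nil {
		panic(err)
	}
}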
	I0804 00:20:15.859131   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:15.859169   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:15.917686   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:15.917726   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:15.964285   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:15.964328   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:16.019646   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:16.019679   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:16.069379   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:16.069416   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:16.129752   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:16.129842   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:16.177015   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:16.177043   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:16.220526   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:16.220560   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:18.771509   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:18.793252   64502 api_server.go:72] duration metric: took 4m15.042389156s to wait for apiserver process to appear ...
	I0804 00:20:18.793291   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:18.793334   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:18.793415   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:18.839339   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:18.839363   64502 cri.go:89] found id: ""
	I0804 00:20:18.839372   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:18.839432   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.843932   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:18.844005   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:18.894398   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:18.894422   64502 cri.go:89] found id: ""
	I0804 00:20:18.894432   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:18.894491   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.899596   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:18.899664   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:18.947077   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:18.947106   64502 cri.go:89] found id: ""
	I0804 00:20:18.947114   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:18.947168   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.952349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:18.952431   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:18.999336   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:18.999361   64502 cri.go:89] found id: ""
	I0804 00:20:18.999377   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:18.999441   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.005419   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:19.005502   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:19.061388   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.061413   64502 cri.go:89] found id: ""
	I0804 00:20:19.061422   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:19.061476   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.066071   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:19.066139   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:19.111849   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.111872   64502 cri.go:89] found id: ""
	I0804 00:20:19.111879   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:19.111929   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.116272   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:19.116323   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:19.157387   64502 cri.go:89] found id: ""
	I0804 00:20:19.157414   64502 logs.go:276] 0 containers: []
	W0804 00:20:19.157423   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:19.157431   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:19.157493   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:19.199627   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.199654   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.199660   64502 cri.go:89] found id: ""
	I0804 00:20:19.199669   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:19.199727   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.204317   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.208707   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:19.208729   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:19.261684   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:19.261717   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:19.309861   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:19.309890   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:19.349376   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:19.349403   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.388561   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:19.388590   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.466119   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:19.466163   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.515539   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:19.515575   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.561529   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:19.561556   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:19.626188   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:19.626219   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:19.640348   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:19.640372   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:19.772397   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:19.772439   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:19.827217   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:19.827260   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:20.306543   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:20.306589   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:20.371388   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:20:20.384738   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:20:20.404547   65087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:20:20.404607   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:20.404659   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-118016 minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=no-preload-118016 minikube.k8s.io/primary=true
	I0804 00:20:20.602476   65087 ops.go:34] apiserver oom_adj: -16
	I0804 00:20:20.602551   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.103011   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.602888   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.102779   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.603282   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.103337   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.603522   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.103510   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.603474   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.689895   65087 kubeadm.go:1113] duration metric: took 4.285337247s to wait for elevateKubeSystemPrivileges
	I0804 00:20:24.689931   65087 kubeadm.go:394] duration metric: took 5m0.881315877s to StartCluster
	I0804 00:20:24.689947   65087 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.690018   65087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:20:24.691559   65087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.691784   65087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:20:24.691848   65087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:20:24.691963   65087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-118016"
	I0804 00:20:24.691977   65087 addons.go:69] Setting default-storageclass=true in profile "no-preload-118016"
	I0804 00:20:24.691999   65087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-118016"
	I0804 00:20:24.692001   65087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118016"
	I0804 00:20:24.692001   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:20:24.692018   65087 addons.go:69] Setting metrics-server=true in profile "no-preload-118016"
	W0804 00:20:24.692007   65087 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:20:24.692068   65087 addons.go:234] Setting addon metrics-server=true in "no-preload-118016"
	I0804 00:20:24.692086   65087 host.go:66] Checking if "no-preload-118016" exists ...
	W0804 00:20:24.692099   65087 addons.go:243] addon metrics-server should already be in state true
	I0804 00:20:24.692142   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.692440   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692464   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692494   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692441   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692517   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692566   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.693590   65087 out.go:177] * Verifying Kubernetes components...
	I0804 00:20:24.695139   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:20:24.708841   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0804 00:20:24.709324   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.709911   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.709937   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.710305   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.710594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.712827   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0804 00:20:24.712894   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0804 00:20:24.713349   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713884   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713899   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.713923   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713942   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.714211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714264   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714421   65087 addons.go:234] Setting addon default-storageclass=true in "no-preload-118016"
	W0804 00:20:24.714440   65087 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:20:24.714468   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.714605   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714623   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714801   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714846   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714993   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.715014   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.730476   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0804 00:20:24.730811   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0804 00:20:24.730912   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731145   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731470   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731486   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731733   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731748   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731808   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732034   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.732123   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732294   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.733677   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0804 00:20:24.734185   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.734257   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734306   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734689   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.734709   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.735090   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.735618   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.735643   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.736977   65087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:20:24.736979   65087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:20:22.853589   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:20:22.859439   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:20:22.860503   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:22.860521   64502 api_server.go:131] duration metric: took 4.067223392s to wait for apiserver health ...
	I0804 00:20:22.860528   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:22.860550   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:22.860598   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:22.901174   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:22.901193   64502 cri.go:89] found id: ""
	I0804 00:20:22.901200   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:22.901246   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.905319   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:22.905404   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:22.948354   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:22.948378   64502 cri.go:89] found id: ""
	I0804 00:20:22.948387   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:22.948438   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.952776   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:22.952863   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:22.989339   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:22.989376   64502 cri.go:89] found id: ""
	I0804 00:20:22.989385   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:22.989443   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.993833   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:22.993909   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:23.035367   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.035385   64502 cri.go:89] found id: ""
	I0804 00:20:23.035392   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:23.035434   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.040184   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:23.040259   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:23.078508   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.078529   64502 cri.go:89] found id: ""
	I0804 00:20:23.078538   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:23.078601   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.082907   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:23.082969   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:23.120846   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.120870   64502 cri.go:89] found id: ""
	I0804 00:20:23.120880   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:23.120943   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.125641   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:23.125702   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:23.172188   64502 cri.go:89] found id: ""
	I0804 00:20:23.172212   64502 logs.go:276] 0 containers: []
	W0804 00:20:23.172224   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:23.172232   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:23.172297   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:23.218188   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.218207   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.218211   64502 cri.go:89] found id: ""
	I0804 00:20:23.218217   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:23.218268   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.222562   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.226965   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:23.226989   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:23.269384   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:23.269414   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.309148   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:23.309178   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.356908   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:23.356936   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.395352   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:23.395381   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:23.450901   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:23.450925   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.488908   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:23.488945   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.551780   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:23.551808   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:23.975030   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:23.975070   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:24.035464   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:24.035497   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:24.053118   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:24.053148   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:24.197157   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:24.197189   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:24.254049   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:24.254083   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:24.738735   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:20:24.738757   65087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:20:24.738785   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.738836   65087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:24.738847   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:20:24.738860   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.742131   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742539   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.742569   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742690   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.742968   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743009   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.743254   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.743142   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743174   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.743387   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.743447   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743590   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743720   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.754983   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0804 00:20:24.755436   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.755877   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.755901   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.756229   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.756454   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.758285   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.758520   65087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:24.758537   65087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:20:24.758555   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.761268   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.761715   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.761739   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.762001   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.762211   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.762402   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.762593   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.942323   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:20:24.971293   65087 node_ready.go:35] waiting up to 6m0s for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991406   65087 node_ready.go:49] node "no-preload-118016" has status "Ready":"True"
	I0804 00:20:24.991428   65087 node_ready.go:38] duration metric: took 20.101499ms for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991436   65087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:25.004484   65087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:25.069407   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:20:25.069437   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:20:25.093645   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:25.178590   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:20:25.178615   65087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:20:25.246634   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:25.272880   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.272916   65087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:20:25.368517   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.442382   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442406   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.442668   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.442711   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.442717   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.442726   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442732   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.444425   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.444456   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.444463   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.451275   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.451298   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.451605   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.451620   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.451617   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218075   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218105   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218391   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218416   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.218427   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218435   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218440   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218702   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218764   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218786   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.671629   65087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.303057537s)
	I0804 00:20:26.671683   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.671702   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672010   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672031   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672041   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.672049   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672327   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672363   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672378   65087 addons.go:475] Verifying addon metrics-server=true in "no-preload-118016"
	I0804 00:20:26.674374   65087 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:20:26.803868   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:26.803909   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.803917   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.803923   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.803928   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.803934   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.803940   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.803948   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.803957   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.803966   64502 system_pods.go:74] duration metric: took 3.943432992s to wait for pod list to return data ...
	I0804 00:20:26.803978   64502 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:26.808760   64502 default_sa.go:45] found service account: "default"
	I0804 00:20:26.808786   64502 default_sa.go:55] duration metric: took 4.797226ms for default service account to be created ...
	I0804 00:20:26.808796   64502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:26.814721   64502 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:26.814753   64502 system_pods.go:89] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.814761   64502 system_pods.go:89] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.814768   64502 system_pods.go:89] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.814774   64502 system_pods.go:89] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.814780   64502 system_pods.go:89] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.814787   64502 system_pods.go:89] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.814798   64502 system_pods.go:89] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.814807   64502 system_pods.go:89] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.814819   64502 system_pods.go:126] duration metric: took 6.01558ms to wait for k8s-apps to be running ...
	I0804 00:20:26.814828   64502 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:26.814894   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:26.837462   64502 system_svc.go:56] duration metric: took 22.624089ms WaitForService to wait for kubelet
	I0804 00:20:26.837494   64502 kubeadm.go:582] duration metric: took 4m23.086636256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:26.837522   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:26.841517   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:26.841548   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:26.841563   64502 node_conditions.go:105] duration metric: took 4.034693ms to run NodePressure ...
	I0804 00:20:26.841576   64502 start.go:241] waiting for startup goroutines ...
	I0804 00:20:26.841586   64502 start.go:246] waiting for cluster config update ...
	I0804 00:20:26.841600   64502 start.go:255] writing updated cluster config ...
	I0804 00:20:26.841939   64502 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:26.908142   64502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:26.910191   64502 out.go:177] * Done! kubectl is now configured to use "embed-certs-877598" cluster and "default" namespace by default
	I0804 00:20:26.675679   65087 addons.go:510] duration metric: took 1.98382947s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:20:27.012226   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:29.511886   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:32.011000   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:32.011021   65087 pod_ready.go:81] duration metric: took 7.006508451s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:32.011031   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518235   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.518260   65087 pod_ready.go:81] duration metric: took 1.507219524s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518270   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522894   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.522916   65087 pod_ready.go:81] duration metric: took 4.639763ms for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522928   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527271   65087 pod_ready.go:92] pod "kube-proxy-4jqng" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.527291   65087 pod_ready.go:81] duration metric: took 4.353851ms for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527303   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531405   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.531424   65087 pod_ready.go:81] duration metric: took 4.113418ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531433   65087 pod_ready.go:38] duration metric: took 8.539987559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:33.531449   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:33.531505   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:33.546783   65087 api_server.go:72] duration metric: took 8.854972636s to wait for apiserver process to appear ...
	I0804 00:20:33.546813   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:33.546832   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:20:33.551131   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:20:33.552092   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:20:33.552112   65087 api_server.go:131] duration metric: took 5.292367ms to wait for apiserver health ...
	I0804 00:20:33.552119   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:33.557965   65087 system_pods.go:59] 9 kube-system pods found
	I0804 00:20:33.557987   65087 system_pods.go:61] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.557995   65087 system_pods.go:61] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.558000   65087 system_pods.go:61] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.558005   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.558009   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.558014   65087 system_pods.go:61] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.558018   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.558026   65087 system_pods.go:61] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.558035   65087 system_pods.go:61] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.558045   65087 system_pods.go:74] duration metric: took 5.921154ms to wait for pod list to return data ...
	I0804 00:20:33.558057   65087 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:33.608139   65087 default_sa.go:45] found service account: "default"
	I0804 00:20:33.608164   65087 default_sa.go:55] duration metric: took 50.097877ms for default service account to be created ...
	I0804 00:20:33.608174   65087 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:33.811878   65087 system_pods.go:86] 9 kube-system pods found
	I0804 00:20:33.811906   65087 system_pods.go:89] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.811912   65087 system_pods.go:89] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.811916   65087 system_pods.go:89] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.811920   65087 system_pods.go:89] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.811925   65087 system_pods.go:89] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.811928   65087 system_pods.go:89] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.811932   65087 system_pods.go:89] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.811939   65087 system_pods.go:89] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.811943   65087 system_pods.go:89] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.811950   65087 system_pods.go:126] duration metric: took 203.770479ms to wait for k8s-apps to be running ...
	I0804 00:20:33.811957   65087 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:33.812000   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:33.827146   65087 system_svc.go:56] duration metric: took 15.17867ms WaitForService to wait for kubelet
	I0804 00:20:33.827176   65087 kubeadm.go:582] duration metric: took 9.135367695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:33.827199   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:34.009032   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:34.009056   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:34.009076   65087 node_conditions.go:105] duration metric: took 181.872031ms to run NodePressure ...
	I0804 00:20:34.009086   65087 start.go:241] waiting for startup goroutines ...
	I0804 00:20:34.009112   65087 start.go:246] waiting for cluster config update ...
	I0804 00:20:34.009128   65087 start.go:255] writing updated cluster config ...
	I0804 00:20:34.009423   65087 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:34.059796   65087 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 00:20:34.061903   65087 out.go:177] * Done! kubectl is now configured to use "no-preload-118016" cluster and "default" namespace by default
	I0804 00:21:00.664979   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:21:00.665100   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:21:00.666810   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:00.666904   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:00.667020   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:00.667150   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:00.667278   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:00.667370   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:00.670254   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:00.670337   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:00.670431   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:00.670537   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:00.670623   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:00.670721   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:00.670788   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:00.670883   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:00.670990   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:00.671079   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:00.671168   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:00.671217   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:00.671265   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:00.671359   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:00.671442   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:00.671529   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:00.671611   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:00.671756   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:00.671856   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:00.671888   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:00.671940   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:00.673410   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:00.673506   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:00.673573   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:00.673627   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:00.673692   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:00.673828   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:00.673876   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:00.673972   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674207   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674283   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674517   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674590   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674752   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674851   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675053   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675173   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675451   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675463   64758 kubeadm.go:310] 
	I0804 00:21:00.675511   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:21:00.675561   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:21:00.675571   64758 kubeadm.go:310] 
	I0804 00:21:00.675614   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:21:00.675656   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:21:00.675787   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:21:00.675797   64758 kubeadm.go:310] 
	I0804 00:21:00.675928   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:21:00.675970   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:21:00.676009   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:21:00.676026   64758 kubeadm.go:310] 
	I0804 00:21:00.676172   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:21:00.676278   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:21:00.676289   64758 kubeadm.go:310] 
	I0804 00:21:00.676393   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:21:00.676466   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:21:00.676532   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:21:00.676609   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:21:00.676632   64758 kubeadm.go:310] 
	W0804 00:21:00.676723   64758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 00:21:00.676765   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:21:01.138781   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:21:01.154405   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:21:01.164426   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:21:01.164445   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:21:01.164496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:21:01.173853   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:21:01.173907   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:21:01.183634   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:21:01.193283   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:21:01.193342   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:21:01.202427   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.212186   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:21:01.212235   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.222754   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:21:01.232996   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:21:01.233059   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:21:01.243778   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:21:01.319895   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:01.319975   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:01.474907   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:01.475029   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:01.475119   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:01.683624   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:01.685482   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:01.685584   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:01.685691   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:01.685792   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:01.685880   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:01.685991   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:01.686067   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:01.686147   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:01.686285   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:01.686399   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:01.686513   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:01.686600   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:01.686670   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:01.985613   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:02.088377   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:02.336621   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:02.448654   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:02.470140   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:02.471390   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:02.471456   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:02.610840   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:02.612641   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:02.612744   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:02.627044   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:02.629019   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:02.630430   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:02.633022   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:42.635581   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:42.635740   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:42.636036   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:47.636656   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:47.636879   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:57.637900   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:57.638098   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:17.638425   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:17.638634   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637807   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:57.637988   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637996   64758 kubeadm.go:310] 
	I0804 00:22:57.638035   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:22:57.638079   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:22:57.638085   64758 kubeadm.go:310] 
	I0804 00:22:57.638118   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:22:57.638148   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:22:57.638288   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:22:57.638309   64758 kubeadm.go:310] 
	I0804 00:22:57.638426   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:22:57.638507   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:22:57.638619   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:22:57.638640   64758 kubeadm.go:310] 
	I0804 00:22:57.638829   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:22:57.638944   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:22:57.638959   64758 kubeadm.go:310] 
	I0804 00:22:57.639107   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:22:57.639191   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:22:57.639300   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:22:57.639399   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:22:57.639412   64758 kubeadm.go:310] 
	I0804 00:22:57.639782   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:22:57.639904   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:22:57.640012   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:22:57.640091   64758 kubeadm.go:394] duration metric: took 8m3.172057529s to StartCluster
	I0804 00:22:57.640138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:22:57.640202   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:22:57.684020   64758 cri.go:89] found id: ""
	I0804 00:22:57.684054   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.684064   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:22:57.684072   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:22:57.684134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:22:57.722756   64758 cri.go:89] found id: ""
	I0804 00:22:57.722780   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.722788   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:22:57.722793   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:22:57.722851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:22:57.760371   64758 cri.go:89] found id: ""
	I0804 00:22:57.760400   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.760412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:22:57.760419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:22:57.760476   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:22:57.796114   64758 cri.go:89] found id: ""
	I0804 00:22:57.796144   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.796155   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:22:57.796162   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:22:57.796211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:22:57.842148   64758 cri.go:89] found id: ""
	I0804 00:22:57.842179   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.842191   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:22:57.842198   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:22:57.842286   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:22:57.914193   64758 cri.go:89] found id: ""
	I0804 00:22:57.914218   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.914229   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:22:57.914236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:22:57.914290   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:22:57.965944   64758 cri.go:89] found id: ""
	I0804 00:22:57.965973   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.965984   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:22:57.965991   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:22:57.966063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:22:58.003016   64758 cri.go:89] found id: ""
	I0804 00:22:58.003044   64758 logs.go:276] 0 containers: []
	W0804 00:22:58.003055   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:22:58.003066   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:22:58.003093   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:22:58.017277   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:22:58.017304   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:22:58.094192   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:22:58.094214   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:22:58.094227   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:22:58.210901   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:22:58.210944   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:22:58.249283   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:22:58.249317   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:22:58.300998   64758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:22:58.301054   64758 out.go:239] * 
	W0804 00:22:58.301115   64758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.301137   64758 out.go:239] * 
	W0804 00:22:58.301978   64758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:22:58.305305   64758 out.go:177] 
	W0804 00:22:58.306722   64758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.306816   64758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:22:58.306848   64758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:22:58.308372   64758 out.go:177] 
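	The suggestion printed above is the actionable part of this failure: the kubelet never answered on localhost:10248, and minikube points at the kubelet cgroup driver. A minimal sketch of how that advice could be applied on a retry is shown below; the profile placeholder <profile> and the kvm2/crio flags are assumptions inferred from this job's configuration, not part of the captured log:

		# Hypothetical retry (illustration only, not part of the captured log):
		# start the profile again with the kubelet cgroup driver pinned to systemd,
		# as suggested by the failure output above.
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd

		# If the control plane still fails to come up, inspect the kubelet journal
		# on the node (same command the kubeadm output recommends).
		minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"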
	
	
	==> CRI-O <==
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.040325806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731369040301319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c523a1dc-9119-4efa-8015-56b5156f1d9b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.041106754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d1e10ca-114d-4fe9-8172-e4b35e7cbc32 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.041228058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d1e10ca-114d-4fe9-8172-e4b35e7cbc32 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.041424036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d1e10ca-114d-4fe9-8172-e4b35e7cbc32 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.083750803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3579ec39-3779-453a-8c8d-daf7bdfe9c69 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.083845487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3579ec39-3779-453a-8c8d-daf7bdfe9c69 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.085154526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72cc41f8-6e66-421b-9f4f-7e292740b666 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.085771526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731369085739667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72cc41f8-6e66-421b-9f4f-7e292740b666 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.086538762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49fe62bc-5034-4b7c-a32d-d7bcde1d131c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.086687160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49fe62bc-5034-4b7c-a32d-d7bcde1d131c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.086951480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49fe62bc-5034-4b7c-a32d-d7bcde1d131c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.131476042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf3d7c17-c675-4c13-b874-9a255d112439 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.131551476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf3d7c17-c675-4c13-b874-9a255d112439 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.132979455Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6eccdd5-ac2c-4274-bb61-d69c28602e01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.133477411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731369133454255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6eccdd5-ac2c-4274-bb61-d69c28602e01 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.134015917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84e1e122-90a3-438e-a41b-67b4a81212df name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.134070301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84e1e122-90a3-438e-a41b-67b4a81212df name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.134273346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84e1e122-90a3-438e-a41b-67b4a81212df name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.169722597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac85c408-53d3-4b14-8252-dd8b44902bf7 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.169793088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac85c408-53d3-4b14-8252-dd8b44902bf7 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.170995004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38a268f5-b495-46d9-b083-320e8c420333 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.171422748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731369171400272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38a268f5-b495-46d9-b083-320e8c420333 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.171889150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eda0d341-354b-45ef-bf1c-6f534ea50f2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.171937507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eda0d341-354b-45ef-bf1c-6f534ea50f2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:29 embed-certs-877598 crio[726]: time="2024-08-04 00:29:29.172140547Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eda0d341-354b-45ef-bf1c-6f534ea50f2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5820e4bb2538f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   ecece52031ec1       storage-provisioner
	8c57112216393       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   7a1dd3f30cd5d       busybox
	102bbb96ee07a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   0540bff765981       coredns-7db6d8ff4d-7gbcf
	b4591fddfa08b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   ecece52031ec1       storage-provisioner
	08432bdee33dc       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   1d6379cc912f2       kube-proxy-wk8zf
	7327ad855d4f6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   e2fac095f10c1       etcd-embed-certs-877598
	d044ac1fa318f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   f34e54e96c754       kube-apiserver-embed-certs-877598
	5cdb842231bc7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   26709f1531df5       kube-scheduler-embed-certs-877598
	d7780d9d7ff2f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   8c55ed6a34965       kube-controller-manager-embed-certs-877598
	
	
	==> coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56021 - 45716 "HINFO IN 4793388100201839205.6480537112018857910. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01519527s
	
	
	==> describe nodes <==
	Name:               embed-certs-877598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-877598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=embed-certs-877598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_06_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:06:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-877598
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:29:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:26:40 +0000   Sun, 04 Aug 2024 00:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:26:40 +0000   Sun, 04 Aug 2024 00:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:26:40 +0000   Sun, 04 Aug 2024 00:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:26:40 +0000   Sun, 04 Aug 2024 00:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.140
	  Hostname:    embed-certs-877598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d518e0e244d4c3bb6414a29d58c2ba9
	  System UUID:                9d518e0e-244d-4c3b-b641-4a29d58c2ba9
	  Boot ID:                    f2fd7776-b47a-43b1-9475-185f492b3df2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-7gbcf                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-877598                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-877598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-877598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-wk8zf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-877598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-hbcm9               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-877598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-877598 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-877598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-877598 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-877598 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-877598 event: Registered Node embed-certs-877598 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-877598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-877598 event: Registered Node embed-certs-877598 in Controller
	
	
	==> dmesg <==
	[Aug 4 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063188] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051450] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.295183] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.731580] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.444155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.989590] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.065771] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064255] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.195185] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.122052] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.294854] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.645987] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.059555] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.132092] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.708507] kauditd_printk_skb: 97 callbacks suppressed
	[Aug 4 00:16] systemd-fstab-generator[1532]: Ignoring "noauto" option for root device
	[  +1.782019] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.316081] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] <==
	{"level":"info","ts":"2024-08-04T00:15:55.878827Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","added-peer-id":"85ea5ca067fb3fe3","added-peer-peer-urls":["https://192.168.50.140:2380"]}
	{"level":"info","ts":"2024-08-04T00:15:55.878955Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:15:55.878999Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:15:55.89371Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:15:55.896053Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"85ea5ca067fb3fe3","initial-advertise-peer-urls":["https://192.168.50.140:2380"],"listen-peer-urls":["https://192.168.50.140:2380"],"advertise-client-urls":["https://192.168.50.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:15:55.896135Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:15:55.895768Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2024-08-04T00:15:55.896239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2024-08-04T00:15:57.389042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T00:15:57.389183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:15:57.38924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgPreVoteResp from 85ea5ca067fb3fe3 at term 2"}
	{"level":"info","ts":"2024-08-04T00:15:57.389276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:15:57.38931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgVoteResp from 85ea5ca067fb3fe3 at term 3"}
	{"level":"info","ts":"2024-08-04T00:15:57.389336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:15:57.389366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 85ea5ca067fb3fe3 elected leader 85ea5ca067fb3fe3 at term 3"}
	{"level":"info","ts":"2024-08-04T00:15:57.400052Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"85ea5ca067fb3fe3","local-member-attributes":"{Name:embed-certs-877598 ClientURLs:[https://192.168.50.140:2379]}","request-path":"/0/members/85ea5ca067fb3fe3/attributes","cluster-id":"77a8f052fa5fccd4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:15:57.400156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:15:57.40026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:15:57.401046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:15:57.401115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:15:57.402953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.140:2379"}
	{"level":"info","ts":"2024-08-04T00:15:57.403076Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:25:57.436621Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":850}
	{"level":"info","ts":"2024-08-04T00:25:57.446846Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":850,"took":"9.671974ms","hash":2058986012,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2215936,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-04T00:25:57.446976Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2058986012,"revision":850,"compact-revision":-1}
	
	
	==> kernel <==
	 00:29:29 up 13 min,  0 users,  load average: 0.28, 0.26, 0.19
	Linux embed-certs-877598 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] <==
	I0804 00:23:59.768396       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:25:58.768953       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:25:58.769073       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0804 00:25:59.770031       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:25:59.770118       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:25:59.770135       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:25:59.770194       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:25:59.770246       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:25:59.771464       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:26:59.770678       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:26:59.770755       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:26:59.770762       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:26:59.771934       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:26:59.771977       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:26:59.771984       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:28:59.770905       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:28:59.771203       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:28:59.771232       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:28:59.772497       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:28:59.772661       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:28:59.772709       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] <==
	I0804 00:23:44.786122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:24:14.217677       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:24:14.794383       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:24:44.223198       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:24:44.804235       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:25:14.229036       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:25:14.811615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:25:44.233224       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:25:44.819430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:26:14.238933       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:26:14.827413       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:26:44.244310       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:26:44.835971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:27:07.596380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="291.381µs"
	E0804 00:27:14.250381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:27:14.846722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:27:18.596950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="47.698µs"
	E0804 00:27:44.255834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:27:44.855750       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:28:14.262542       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:28:14.863133       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:28:44.267536       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:28:44.870298       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:29:14.273186       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:29:14.881167       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] <==
	I0804 00:16:00.239998       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:16:00.254109       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.140"]
	I0804 00:16:00.293729       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:16:00.293832       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:16:00.293850       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:16:00.298831       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:16:00.299086       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:16:00.299120       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:16:00.300801       1 config.go:192] "Starting service config controller"
	I0804 00:16:00.300830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:16:00.300864       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:16:00.300867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:16:00.301227       1 config.go:319] "Starting node config controller"
	I0804 00:16:00.301258       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:16:00.401779       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:16:00.401833       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:16:00.401875       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] <==
	I0804 00:15:56.515055       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:15:58.746042       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:15:58.746170       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:15:58.746265       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:15:58.746289       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:15:58.804051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:15:58.804093       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:15:58.810907       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:15:58.811137       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:15:58.811157       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:15:58.811179       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:15:58.911853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:26:54 embed-certs-877598 kubelet[941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:26:54 embed-certs-877598 kubelet[941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:26:54 embed-certs-877598 kubelet[941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:27:07 embed-certs-877598 kubelet[941]: E0804 00:27:07.580841     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:27:18 embed-certs-877598 kubelet[941]: E0804 00:27:18.581462     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:27:32 embed-certs-877598 kubelet[941]: E0804 00:27:32.580883     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:27:46 embed-certs-877598 kubelet[941]: E0804 00:27:46.579863     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:27:54 embed-certs-877598 kubelet[941]: E0804 00:27:54.603140     941 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:27:54 embed-certs-877598 kubelet[941]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:27:54 embed-certs-877598 kubelet[941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:27:54 embed-certs-877598 kubelet[941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:27:54 embed-certs-877598 kubelet[941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:27:59 embed-certs-877598 kubelet[941]: E0804 00:27:59.580774     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:28:10 embed-certs-877598 kubelet[941]: E0804 00:28:10.582449     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:28:23 embed-certs-877598 kubelet[941]: E0804 00:28:23.580453     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:28:35 embed-certs-877598 kubelet[941]: E0804 00:28:35.579827     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:28:50 embed-certs-877598 kubelet[941]: E0804 00:28:50.583222     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:28:54 embed-certs-877598 kubelet[941]: E0804 00:28:54.600005     941 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:28:54 embed-certs-877598 kubelet[941]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:28:54 embed-certs-877598 kubelet[941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:28:54 embed-certs-877598 kubelet[941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:28:54 embed-certs-877598 kubelet[941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:29:03 embed-certs-877598 kubelet[941]: E0804 00:29:03.581156     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:29:15 embed-certs-877598 kubelet[941]: E0804 00:29:15.579949     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:29:27 embed-certs-877598 kubelet[941]: E0804 00:29:27.580340     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	
	
	==> storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] <==
	I0804 00:16:30.907491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:16:30.928277       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:16:30.928509       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:16:30.942479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:16:30.942823       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-877598_124f64e3-34ea-493a-a521-c50e141e6a3d!
	I0804 00:16:30.943278       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5d1ca75-7f2e-4986-ab8d-28a787066197", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-877598_124f64e3-34ea-493a-a521-c50e141e6a3d became leader
	I0804 00:16:31.046703       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-877598_124f64e3-34ea-493a-a521-c50e141e6a3d!
	
	
	==> storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] <==
	I0804 00:16:00.222244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0804 00:16:30.226800       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-877598 -n embed-certs-877598
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-877598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hbcm9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-877598 describe pod metrics-server-569cc877fc-hbcm9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-877598 describe pod metrics-server-569cc877fc-hbcm9: exit status 1 (60.384395ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hbcm9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-877598 describe pod metrics-server-569cc877fc-hbcm9: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.35s)
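The only non-running pod the post-mortem finds is metrics-server, which this test run deliberately points at an unreachable registry (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entries in the Audit table below), so the ImagePullBackOff messages in the kubelet log above are expected. A rough manual check, assuming the addon's usual k8s-app=metrics-server label; the pod name is taken from the log and may already be gone, as the NotFound error shows:

	# List metrics-server pods in the embed-certs profile (assumed label selector).
	kubectl --context embed-certs-877598 -n kube-system get pods -l k8s-app=metrics-server
	# Show the events recorded for the pod named in the log above (image pull failures, back-off).
	kubectl --context embed-certs-877598 -n kube-system get events --field-selector involvedObject.name=metrics-server-569cc877fc-hbcm9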

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0804 00:20:58.007537   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118016 -n no-preload-118016
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-04 00:29:34.600396698 +0000 UTC m=+6110.542640735
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
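The condition being polled here corresponds roughly to the following manual check; this is only a sketch, with the profile name, namespace, label selector, and 9m0s timeout taken from the log lines above:

	# List the dashboard pods the harness waits for, using the same namespace and selector.
	kubectl --context no-preload-118016 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Equivalent blocking wait with the test's 9m0s budget.
	kubectl --context no-preload-118016 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s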
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-118016 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-118016 logs -n 25: (2.244465152s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302198                           | kubernetes-upgrade-302198    | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-551054 sudo                            | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877598            | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-705918                              | cert-expiration-705918       | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-423330 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | disable-driver-mounts-423330                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:09 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-118016             | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC | 04 Aug 24 00:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-576210        | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:11:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:11:52.361065   65441 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:11:52.361334   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361345   65441 out.go:304] Setting ErrFile to fd 2...
	I0804 00:11:52.361349   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361548   65441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:11:52.362087   65441 out.go:298] Setting JSON to false
	I0804 00:11:52.363002   65441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6856,"bootTime":1722723456,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:11:52.363061   65441 start.go:139] virtualization: kvm guest
	I0804 00:11:52.365345   65441 out.go:177] * [default-k8s-diff-port-969068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:11:52.367170   65441 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:11:52.367161   65441 notify.go:220] Checking for updates...
	I0804 00:11:52.369837   65441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:11:52.371134   65441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:11:52.372226   65441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:11:52.373445   65441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:11:52.374802   65441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:11:52.376375   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:11:52.376787   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.376859   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.392495   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0804 00:11:52.392954   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.393477   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.393497   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.393883   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.394048   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.394313   65441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:11:52.394606   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.394638   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.409194   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0804 00:11:52.409594   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.410032   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.410050   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.410358   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.410529   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.445480   65441 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:11:52.446679   65441 start.go:297] selected driver: kvm2
	I0804 00:11:52.446694   65441 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.446827   65441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:11:52.447792   65441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.447886   65441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:11:52.462893   65441 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:11:52.463275   65441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:11:52.463306   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:11:52.463316   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:11:52.463368   65441 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.463486   65441 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.465374   65441 out.go:177] * Starting "default-k8s-diff-port-969068" primary control-plane node in "default-k8s-diff-port-969068" cluster
	I0804 00:11:52.466656   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:11:52.466698   65441 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:11:52.466710   65441 cache.go:56] Caching tarball of preloaded images
	I0804 00:11:52.466790   65441 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:11:52.466801   65441 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:11:52.466901   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:11:52.467100   65441 start.go:360] acquireMachinesLock for default-k8s-diff-port-969068: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:11:55.809602   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:11:58.881666   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:04.961665   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:08.033617   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:14.113634   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:17.185623   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:23.265618   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:26.337594   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:32.417583   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:35.489705   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:41.569654   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:44.641653   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:50.721640   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:53.793649   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:59.873643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:02.945676   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:09.025652   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:12.097647   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:18.177740   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:21.249606   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:27.329637   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:30.401648   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:36.481588   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:39.553638   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:45.633633   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:48.705646   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:54.785636   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:57.857662   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:03.937643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:07.009557   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:13.089694   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:16.161619   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:22.241650   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:25.313612   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:28.318586   64758 start.go:364] duration metric: took 4m16.324186239s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:14:28.318635   64758 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:28.318646   64758 fix.go:54] fixHost starting: 
	I0804 00:14:28.319092   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:28.319128   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:28.334850   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0804 00:14:28.335321   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:28.335817   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:14:28.335848   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:28.336204   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:28.336435   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:28.336622   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:14:28.338146   64758 fix.go:112] recreateIfNeeded on old-k8s-version-576210: state=Stopped err=<nil>
	I0804 00:14:28.338166   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	W0804 00:14:28.338322   64758 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:28.340640   64758 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	I0804 00:14:28.315605   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:28.315642   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316035   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:14:28.316073   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316325   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:14:28.318440   64502 machine.go:97] duration metric: took 4m37.42620041s to provisionDockerMachine
	I0804 00:14:28.318477   64502 fix.go:56] duration metric: took 4m37.448052873s for fixHost
	I0804 00:14:28.318485   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 4m37.44807127s
	W0804 00:14:28.318509   64502 start.go:714] error starting host: provision: host is not running
	W0804 00:14:28.318594   64502 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0804 00:14:28.318606   64502 start.go:729] Will try again in 5 seconds ...
	I0804 00:14:28.342217   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .Start
	I0804 00:14:28.342401   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:14:28.343274   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:14:28.343761   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:14:28.344268   64758 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:14:28.345080   64758 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:14:29.575420   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:14:29.576307   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.576754   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.576842   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.576711   66003 retry.go:31] will retry after 272.821874ms: waiting for machine to come up
	I0804 00:14:29.851363   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.851951   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.851976   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.851895   66003 retry.go:31] will retry after 247.116514ms: waiting for machine to come up
	I0804 00:14:30.100479   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.100883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.100916   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.100833   66003 retry.go:31] will retry after 353.251065ms: waiting for machine to come up
	I0804 00:14:30.455526   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.455975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.456004   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.455933   66003 retry.go:31] will retry after 558.071575ms: waiting for machine to come up
	I0804 00:14:31.015539   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.015974   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.016000   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.015917   66003 retry.go:31] will retry after 514.757536ms: waiting for machine to come up
	I0804 00:14:31.532799   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.533232   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.533250   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.533186   66003 retry.go:31] will retry after 607.548546ms: waiting for machine to come up
	I0804 00:14:33.318807   64502 start.go:360] acquireMachinesLock for embed-certs-877598: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:14:32.142162   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:32.142658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:32.142693   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:32.142610   66003 retry.go:31] will retry after 897.977595ms: waiting for machine to come up
	I0804 00:14:33.042628   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:33.043002   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:33.043028   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:33.042966   66003 retry.go:31] will retry after 1.094117762s: waiting for machine to come up
	I0804 00:14:34.138946   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:34.139459   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:34.139485   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:34.139414   66003 retry.go:31] will retry after 1.435055372s: waiting for machine to come up
	I0804 00:14:35.576253   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:35.576603   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:35.576625   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:35.576547   66003 retry.go:31] will retry after 1.688006591s: waiting for machine to come up
	I0804 00:14:37.265928   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:37.266429   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:37.266456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:37.266371   66003 retry.go:31] will retry after 2.356818801s: waiting for machine to come up
	I0804 00:14:39.624408   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:39.624832   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:39.624863   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:39.624775   66003 retry.go:31] will retry after 2.41856098s: waiting for machine to come up
	I0804 00:14:46.442402   65087 start.go:364] duration metric: took 3m44.405576801s to acquireMachinesLock for "no-preload-118016"
	I0804 00:14:46.442459   65087 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:46.442469   65087 fix.go:54] fixHost starting: 
	I0804 00:14:46.442938   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:46.442975   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:46.459944   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0804 00:14:46.460375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:46.460851   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:14:46.460871   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:46.461211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:46.461402   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:14:46.461538   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:14:46.463097   65087 fix.go:112] recreateIfNeeded on no-preload-118016: state=Stopped err=<nil>
	I0804 00:14:46.463126   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	W0804 00:14:46.463282   65087 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:46.465711   65087 out.go:177] * Restarting existing kvm2 VM for "no-preload-118016" ...
	I0804 00:14:42.044498   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:42.044855   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:42.044882   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:42.044822   66003 retry.go:31] will retry after 3.111190148s: waiting for machine to come up
	I0804 00:14:45.158161   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.158688   64758 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:14:45.158709   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:14:45.158719   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.159112   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.159138   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | skip adding static IP to network mk-old-k8s-version-576210 - found existing host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"}
	I0804 00:14:45.159151   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:14:45.159163   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:14:45.159172   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:14:45.161469   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161782   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.161812   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161936   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:14:45.161975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:14:45.162015   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:14:45.162034   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:14:45.162044   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:14:45.281546   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
	I0804 00:14:45.281859   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:14:45.282574   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.284998   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285386   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.285414   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285614   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:14:45.285806   64758 machine.go:94] provisionDockerMachine start ...
	I0804 00:14:45.285823   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:45.286098   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.288285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288640   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.288668   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288753   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.288931   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289088   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289253   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.289426   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.289628   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.289640   64758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:14:45.386001   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:14:45.386036   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386325   64758 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:14:45.386348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386536   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.389316   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389718   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.389739   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389948   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.390122   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390285   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390415   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.390557   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.390758   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.390776   64758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:14:45.499644   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:14:45.499695   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.502583   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.502935   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.502959   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.503123   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.503318   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503456   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503570   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.503729   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.503898   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.503915   64758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:14:45.606971   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:45.607003   64758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:14:45.607045   64758 buildroot.go:174] setting up certificates
	I0804 00:14:45.607053   64758 provision.go:84] configureAuth start
	I0804 00:14:45.607062   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.607327   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.610009   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610378   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.610407   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610545   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.612549   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.612876   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.612908   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.613071   64758 provision.go:143] copyHostCerts
	I0804 00:14:45.613134   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:14:45.613147   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:14:45.613231   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:14:45.613343   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:14:45.613368   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:14:45.613410   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:14:45.613491   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:14:45.613501   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:14:45.613535   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:14:45.613609   64758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
	I0804 00:14:45.794221   64758 provision.go:177] copyRemoteCerts
	I0804 00:14:45.794276   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:14:45.794299   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.796859   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797182   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.797225   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.797555   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.797687   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.797804   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:45.875704   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:14:45.903765   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:14:45.930101   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:14:45.955639   64758 provision.go:87] duration metric: took 348.556108ms to configureAuth
	I0804 00:14:45.955668   64758 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:14:45.955874   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:14:45.955960   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.958487   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958835   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.958950   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958970   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.959193   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.959616   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.959789   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.959810   64758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:14:46.217683   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:14:46.217725   64758 machine.go:97] duration metric: took 931.901933ms to provisionDockerMachine
	I0804 00:14:46.217742   64758 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:14:46.217758   64758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:14:46.217787   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.218127   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:14:46.218151   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.220834   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221148   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.221170   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221342   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.221576   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.221733   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.221867   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.300102   64758 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:14:46.304434   64758 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:14:46.304464   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:14:46.304538   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:14:46.304631   64758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:14:46.304747   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:14:46.314378   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:46.339057   64758 start.go:296] duration metric: took 121.299069ms for postStartSetup
	I0804 00:14:46.339105   64758 fix.go:56] duration metric: took 18.020458894s for fixHost
	I0804 00:14:46.339129   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.341883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342258   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.342285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.342688   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342856   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342992   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.343161   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:46.343385   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:46.343400   64758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:14:46.442247   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730486.414818212
	
	I0804 00:14:46.442275   64758 fix.go:216] guest clock: 1722730486.414818212
	I0804 00:14:46.442288   64758 fix.go:229] Guest: 2024-08-04 00:14:46.414818212 +0000 UTC Remote: 2024-08-04 00:14:46.339109981 +0000 UTC m=+274.490542023 (delta=75.708231ms)
	I0804 00:14:46.442313   64758 fix.go:200] guest clock delta is within tolerance: 75.708231ms
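The fix.go lines above are just a clock-skew check: read the guest's clock over SSH with date +%s.%N, compare it to the host's clock, and accept a small delta (about 76ms in this run). The same check as a standalone sketch; the address, SSH key path, and the 2-second tolerance here are illustrative rather than taken verbatim from this log:

    #!/usr/bin/env bash
    # Compare guest and host clocks the way the log above does (illustrative values).
    GUEST=$(ssh -i ~/.minikube/machines/old-k8s-version-576210/id_rsa docker@192.168.72.154 'date +%s.%N')
    HOST=$(date +%s.%N)
    DELTA=$(awk -v h="$HOST" -v g="$GUEST" 'BEGIN { d = h - g; if (d < 0) d = -d; printf "%.6f", d }')
    if awk -v d="$DELTA" 'BEGIN { exit !(d < 2) }'; then
      echo "guest clock delta ${DELTA}s is within tolerance"
    else
      echo "guest clock skew too large: ${DELTA}s"
    fi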
	I0804 00:14:46.442319   64758 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 18.123699316s
	I0804 00:14:46.442347   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.442656   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:46.445456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.445865   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.445892   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.446069   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446577   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446743   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446816   64758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:14:46.446850   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.446965   64758 ssh_runner.go:195] Run: cat /version.json
	I0804 00:14:46.446987   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.449576   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449794   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449953   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.449983   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450178   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450265   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.450317   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450384   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450520   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450605   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450667   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450733   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.450780   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450910   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.534686   64758 ssh_runner.go:195] Run: systemctl --version
	I0804 00:14:46.554270   64758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:14:46.708220   64758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:14:46.714541   64758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:14:46.714607   64758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:14:46.731642   64758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:14:46.731668   64758 start.go:495] detecting cgroup driver to use...
	I0804 00:14:46.731739   64758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:14:46.748782   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:14:46.763556   64758 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:14:46.763640   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:14:46.778075   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:14:46.793133   64758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:14:46.466927   65087 main.go:141] libmachine: (no-preload-118016) Calling .Start
	I0804 00:14:46.467081   65087 main.go:141] libmachine: (no-preload-118016) Ensuring networks are active...
	I0804 00:14:46.467696   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network default is active
	I0804 00:14:46.468023   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network mk-no-preload-118016 is active
	I0804 00:14:46.468344   65087 main.go:141] libmachine: (no-preload-118016) Getting domain xml...
	I0804 00:14:46.468932   65087 main.go:141] libmachine: (no-preload-118016) Creating domain...
	I0804 00:14:46.918377   64758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:14:47.059683   64758 docker.go:233] disabling docker service ...
	I0804 00:14:47.059753   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:14:47.074819   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:14:47.092184   64758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:14:47.235274   64758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:14:47.357937   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:14:47.375273   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:14:47.395182   64758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:14:47.395236   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.407036   64758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:14:47.407092   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.418562   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.434481   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.447488   64758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
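The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" right after it. A quick way to confirm the drop-in ended up with those values; this is a sketch, and the expected lines in the comments are inferred from the sed commands in this log:

    # Show the three keys the sed edits above should have produced in the CRI-O drop-in.
    grep -E '(pause_image|cgroup_manager|conmon_cgroup)[[:space:]]*=' /etc/crio/crio.conf.d/02-crio.conf
    # Expected (inferred from the commands above):
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"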
	I0804 00:14:47.460242   64758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:14:47.471089   64758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:14:47.471143   64758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:14:47.486698   64758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
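The sysctl failure above is expected on a fresh VM: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the code loads the module and then re-enables IP forwarding. A standalone sketch of the same preparation; note the log itself only re-sets ip_forward, while setting bridge-nf-call-iptables explicitly and persisting both are extra steps added here for completeness:

    # Load the bridge-netfilter module so the bridge sysctls exist, then enable the settings.
    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1
    # Optional: persist across reboots (file path is an assumption, not from this log).
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system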
	I0804 00:14:47.498754   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:47.630867   64758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:14:47.796598   64758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:14:47.796690   64758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:14:47.802302   64758 start.go:563] Will wait 60s for crictl version
	I0804 00:14:47.802364   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:47.806368   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:14:47.847588   64758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:14:47.847679   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.877936   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.908229   64758 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:14:47.909635   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:47.912658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913102   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:47.913130   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913438   64758 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:14:47.917910   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:47.931201   64758 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:14:47.931318   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:14:47.931381   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:47.980001   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:47.980071   64758 ssh_runner.go:195] Run: which lz4
	I0804 00:14:47.984277   64758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:14:47.988781   64758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:14:47.988810   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:14:49.706968   64758 crio.go:462] duration metric: took 1.722721175s to copy over tarball
	I0804 00:14:49.707059   64758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:14:47.715321   65087 main.go:141] libmachine: (no-preload-118016) Waiting to get IP...
	I0804 00:14:47.716397   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.716853   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.716889   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.716820   66120 retry.go:31] will retry after 187.841432ms: waiting for machine to come up
	I0804 00:14:47.906481   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.906984   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.907018   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.906942   66120 retry.go:31] will retry after 389.569097ms: waiting for machine to come up
	I0804 00:14:48.298691   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.299997   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.300021   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.299947   66120 retry.go:31] will retry after 382.905254ms: waiting for machine to come up
	I0804 00:14:48.684628   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.685095   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.685127   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.685066   66120 retry.go:31] will retry after 526.267085ms: waiting for machine to come up
	I0804 00:14:49.213459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.214180   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.214203   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.214142   66120 retry.go:31] will retry after 666.253139ms: waiting for machine to come up
	I0804 00:14:49.882141   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.882610   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.882639   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.882560   66120 retry.go:31] will retry after 776.560525ms: waiting for machine to come up
	I0804 00:14:50.660679   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:50.661149   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:50.661177   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:50.661105   66120 retry.go:31] will retry after 825.927722ms: waiting for machine to come up
	I0804 00:14:51.488562   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:51.488937   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:51.488964   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:51.488894   66120 retry.go:31] will retry after 1.210535859s: waiting for machine to come up
	I0804 00:14:52.511242   64758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.804147671s)
	I0804 00:14:52.511275   64758 crio.go:469] duration metric: took 2.804279705s to extract the tarball
	I0804 00:14:52.511285   64758 ssh_runner.go:146] rm: /preloaded.tar.lz4
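The preload step above is a three-part sequence: copy the lz4 image tarball to the VM, unpack it into /var (which pre-populates CRI-O's image store), and delete the archive. The same sequence as a standalone sketch; the staging path and SSH details are illustrative, while the tar flags are copied from the log:

    # 1. Copy the preloaded image tarball to the VM (~473 MB in this run).
    scp -i ~/.minikube/machines/old-k8s-version-576210/id_rsa \
        ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.72.154:/tmp/preloaded.tar.lz4
    # 2. Unpack into /var, preserving security xattrs, decompressing with lz4 (same flags as the log).
    ssh docker@192.168.72.154 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4'
    # 3. Remove the archive once extracted.
    ssh docker@192.168.72.154 'sudo rm -f /tmp/preloaded.tar.lz4'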
	I0804 00:14:52.553905   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:52.587405   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:52.587429   64758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:14:52.587496   64758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.587513   64758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.587550   64758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.587551   64758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.587554   64758 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.587567   64758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.587570   64758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.587577   64758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.589240   64758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.589239   64758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.589247   64758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.589211   64758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.589287   64758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589579   64758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.742969   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.766505   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.782813   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:14:52.788509   64758 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:14:52.788553   64758 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.788598   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.823108   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.829531   64758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:14:52.829577   64758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.829648   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.858209   64758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:14:52.858238   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.858245   64758 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:14:52.858288   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.888665   64758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:14:52.888717   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.888748   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:14:52.888717   64758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.888794   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.918127   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.921386   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:14:52.929839   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.977866   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:14:52.977919   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.977960   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:14:52.994379   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.003198   64758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:14:53.003233   64758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.003273   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.056310   64758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:14:53.056338   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:14:53.056357   64758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.056403   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.062077   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.062119   64758 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:14:53.062161   64758 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.062206   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.064260   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.114709   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:14:53.114758   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.118375   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:14:53.147635   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:14:53.497155   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:53.647242   64758 cache_images.go:92] duration metric: took 1.059794593s to LoadCachedImages
	W0804 00:14:53.647353   64758 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0804 00:14:53.647370   64758 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:14:53.647507   64758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:14:53.647586   64758 ssh_runner.go:195] Run: crio config
	I0804 00:14:53.710377   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:14:53.710399   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:14:53.710411   64758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:14:53.710437   64758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:14:53.710583   64758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:14:53.710661   64758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:14:53.721942   64758 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:14:53.722005   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:14:53.732623   64758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:14:53.749878   64758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:14:53.767147   64758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
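At this point the kubelet drop-in, the kubelet unit, and the kubeadm config generated above have been staged on the node (the config as /var/tmp/minikube/kubeadm.yaml.new). A config of that shape is ultimately handed to kubeadm; a minimal sketch of that step, noting that the exact flags minikube passes (for example its --ignore-preflight-errors list) are not shown in this part of the log:

    # Initialize the control plane from the generated config (sketch; minikube adds further flags).
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=all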
	I0804 00:14:53.785522   64758 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:14:53.789438   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:53.802152   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:53.934508   64758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:14:53.952247   64758 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:14:53.952280   64758 certs.go:194] generating shared ca certs ...
	I0804 00:14:53.952301   64758 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:53.952470   64758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:14:53.952523   64758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:14:53.952536   64758 certs.go:256] generating profile certs ...
	I0804 00:14:53.952658   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:14:53.952730   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:14:53.952783   64758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:14:53.952948   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:14:53.953000   64758 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:14:53.953013   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:14:53.953048   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:14:53.953084   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:14:53.953114   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:14:53.953191   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:53.954013   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:14:54.001446   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:14:54.029628   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:14:54.062713   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:14:54.090711   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:14:54.117970   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:14:54.163691   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:14:54.190151   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:14:54.219334   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:14:54.244677   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:14:54.269795   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:14:54.294949   64758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:14:54.312330   64758 ssh_runner.go:195] Run: openssl version
	I0804 00:14:54.318320   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:14:54.328932   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333686   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333737   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.341330   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:14:54.356008   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:14:54.368966   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373896   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373954   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.379770   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:14:54.390903   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:14:54.402637   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407296   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407362   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.413215   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
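
The test/ln pairs above install each extra CA under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look up trusted CAs. A minimal sketch of that hash-and-symlink step follows; it shells out to openssl just as the log does, and the paths are illustrative rather than minikube's own code.

// Sketch only: link a CA certificate under its OpenSSL subject-hash name,
// mirroring the "openssl x509 -hash" + "ln -fs" steps in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash,
	// e.g. b5213941 for the minikube CA in this run.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Equivalent to: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
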
	I0804 00:14:54.424473   64758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:14:54.429673   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:14:54.436038   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:14:54.442091   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:14:54.448507   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:14:54.455421   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:14:54.461969   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
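
The six -checkend 86400 probes confirm that each control-plane certificate stays valid for at least another 24 hours before the restart proceeds. A rough Go equivalent of that check, using crypto/x509 instead of the openssl CLI (the path below is one of the certs named in the log):

// Sketch only: the Go analogue of "openssl x509 -checkend 86400" — report
// whether a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
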
	I0804 00:14:54.468042   64758 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:14:54.468151   64758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:14:54.468208   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.508109   64758 cri.go:89] found id: ""
	I0804 00:14:54.508183   64758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:14:54.518712   64758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:14:54.518736   64758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:14:54.518788   64758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:14:54.528545   64758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:14:54.529780   64758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:14:54.530411   64758 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-576210" cluster setting kubeconfig missing "old-k8s-version-576210" context setting]
	I0804 00:14:54.531316   64758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:54.550431   64758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:14:54.561047   64758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.154
	I0804 00:14:54.561086   64758 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:14:54.561108   64758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:14:54.561163   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.597213   64758 cri.go:89] found id: ""
	I0804 00:14:54.597282   64758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:14:54.612914   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:14:54.622533   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:14:54.622562   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:14:54.622613   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:14:54.632746   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:14:54.632812   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:14:54.642197   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:14:54.651204   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:14:54.651268   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:14:54.660496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.669448   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:14:54.669512   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.678773   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:14:54.687854   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:14:54.687902   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:14:54.697066   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:14:54.707036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:54.840553   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.551919   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.790500   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.898210   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
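
Because the earlier config check found no admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf, the restart path regenerates the control plane piecewise by invoking discrete kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered /var/tmp/minikube/kubeadm.yaml. A sketch of that phase sequence as a plain runner, assuming the same binary and config paths as the log (the real invocations are additionally wrapped in sudo with PATH pointed at /var/lib/minikube/binaries/v1.20.0):

// Sketch only: replay the kubeadm init phases shown in the log, in order,
// against the rendered kubeadm.yaml. Not minikube's own runner.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		cmd := exec.Command(kubeadm, append(phase, "--config", cfg)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", phase, err)
			return
		}
	}
}
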
	I0804 00:14:55.995621   64758 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:14:55.995711   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.496072   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:52.701200   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:52.701574   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:52.701598   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:52.701547   66120 retry.go:31] will retry after 1.518623613s: waiting for machine to come up
	I0804 00:14:54.221367   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:54.221886   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:54.221916   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:54.221835   66120 retry.go:31] will retry after 1.869121058s: waiting for machine to come up
	I0804 00:14:56.092101   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:56.092527   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:56.092550   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:56.092488   66120 retry.go:31] will retry after 2.071227436s: waiting for machine to come up
	I0804 00:14:56.995965   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.496285   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.995805   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.496549   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.996224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.496360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.996056   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.496435   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.166383   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:58.166760   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:58.166807   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:58.166729   66120 retry.go:31] will retry after 2.352991709s: waiting for machine to come up
	I0804 00:15:00.522153   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:00.522630   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:15:00.522657   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:15:00.522584   66120 retry.go:31] will retry after 3.326179831s: waiting for machine to come up
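
The libmachine DBG lines for no-preload-118016 come from its wait-for-IP loop: while the restarted VM has no DHCP lease yet, the driver retries with a growing delay (1.5s, 1.9s, 2.1s, 2.4s, 3.3s above). A minimal sketch of that poll-with-backoff shape, with lookupIP as a stand-in for the actual libvirt lease query:

// Sketch only: the "will retry after ..." pattern from retry.go — poll a
// lookup with a growing delay until it succeeds or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the wait, roughly as the delays in the log do
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3*time.Second)
	fmt.Println(ip, err)
}
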
	I0804 00:15:05.170439   65441 start.go:364] duration metric: took 3m12.703297591s to acquireMachinesLock for "default-k8s-diff-port-969068"
	I0804 00:15:05.170512   65441 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:05.170520   65441 fix.go:54] fixHost starting: 
	I0804 00:15:05.170935   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:05.170974   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:05.188546   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0804 00:15:05.188997   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:05.189494   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:05.189518   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:05.189933   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:05.190132   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:05.190276   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:05.191653   65441 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969068: state=Stopped err=<nil>
	I0804 00:15:05.191684   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	W0804 00:15:05.191834   65441 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:05.194275   65441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-969068" ...
	I0804 00:15:01.996148   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.496756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.996430   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.496646   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.996707   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.496772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.995997   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.496651   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.996384   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.496403   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
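
In parallel, the old-k8s-version restart sits in api_server.go's wait loop, re-running sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until an apiserver process shows up. A sketch of that polling loop, run locally here as a stand-in for the SSH-executed command in the log:

// Sketch only: poll for a kube-apiserver process every 500ms, as the
// repeated pgrep lines above do, until it appears or the context expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServer(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // PID of the newest matching process
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	pid, err := waitForAPIServer(ctx)
	fmt.Println(pid, err)
}
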
	I0804 00:15:03.850063   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850518   65087 main.go:141] libmachine: (no-preload-118016) Found IP for machine: 192.168.61.137
	I0804 00:15:03.850544   65087 main.go:141] libmachine: (no-preload-118016) Reserving static IP address...
	I0804 00:15:03.850559   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has current primary IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850970   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.851001   65087 main.go:141] libmachine: (no-preload-118016) DBG | skip adding static IP to network mk-no-preload-118016 - found existing host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"}
	I0804 00:15:03.851015   65087 main.go:141] libmachine: (no-preload-118016) Reserved static IP address: 192.168.61.137
	I0804 00:15:03.851030   65087 main.go:141] libmachine: (no-preload-118016) Waiting for SSH to be available...
	I0804 00:15:03.851048   65087 main.go:141] libmachine: (no-preload-118016) DBG | Getting to WaitForSSH function...
	I0804 00:15:03.853316   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853676   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.853705   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853819   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH client type: external
	I0804 00:15:03.853850   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa (-rw-------)
	I0804 00:15:03.853886   65087 main.go:141] libmachine: (no-preload-118016) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:03.853901   65087 main.go:141] libmachine: (no-preload-118016) DBG | About to run SSH command:
	I0804 00:15:03.853913   65087 main.go:141] libmachine: (no-preload-118016) DBG | exit 0
	I0804 00:15:03.981414   65087 main.go:141] libmachine: (no-preload-118016) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:03.981807   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetConfigRaw
	I0804 00:15:03.982419   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:03.985062   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985400   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.985433   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985674   65087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/config.json ...
	I0804 00:15:03.985857   65087 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:03.985873   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:03.986090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:03.988490   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.988798   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.988826   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.989017   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:03.989183   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989342   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989510   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:03.989697   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:03.989916   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:03.989927   65087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:04.106042   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:04.106090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106372   65087 buildroot.go:166] provisioning hostname "no-preload-118016"
	I0804 00:15:04.106398   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.109434   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.109803   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109919   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.110092   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110248   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110423   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.110582   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.110749   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.110764   65087 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-118016 && echo "no-preload-118016" | sudo tee /etc/hostname
	I0804 00:15:04.239856   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-118016
	
	I0804 00:15:04.239884   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.242877   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243241   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.243271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243486   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.243712   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.243897   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.244046   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.244232   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.244420   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.244443   65087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118016/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:04.367259   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:04.367289   65087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:04.367330   65087 buildroot.go:174] setting up certificates
	I0804 00:15:04.367340   65087 provision.go:84] configureAuth start
	I0804 00:15:04.367432   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.367848   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:04.370330   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370630   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.370658   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370744   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.372799   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373175   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.373203   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373308   65087 provision.go:143] copyHostCerts
	I0804 00:15:04.373386   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:04.373399   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:04.373458   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:04.373557   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:04.373565   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:04.373585   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:04.373651   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:04.373657   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:04.373675   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:04.373732   65087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.no-preload-118016 san=[127.0.0.1 192.168.61.137 localhost minikube no-preload-118016]
	I0804 00:15:04.467261   65087 provision.go:177] copyRemoteCerts
	I0804 00:15:04.467322   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:04.467347   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.469843   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470126   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.470154   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470297   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.470478   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.470644   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.470761   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:04.559980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:04.585701   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:04.610270   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:04.633954   65087 provision.go:87] duration metric: took 266.53536ms to configureAuth
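
configureAuth refreshes the machine's TLS material: the host CA and client certs are copied over, and a new server certificate is issued whose SANs cover every name the VM answers to (127.0.0.1, 192.168.61.137, localhost, minikube, no-preload-118016) before being pushed to /etc/docker. A sketch of issuing such a SAN'd server certificate from an existing CA with crypto/x509, using illustrative file names and assuming a PKCS#1 RSA CA key; the validity period below is also an assumption, not minikube's value:

// Sketch only: issue a server certificate signed by an existing CA with the
// SANs listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
	if err != nil {
		panic(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-118016"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-118016"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.137")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
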
	I0804 00:15:04.633981   65087 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:04.634154   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:15:04.634219   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.636880   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637243   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.637271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637452   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.637664   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637823   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637921   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.638060   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.638234   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.638250   65087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:04.916045   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:04.916077   65087 machine.go:97] duration metric: took 930.20802ms to provisionDockerMachine
	I0804 00:15:04.916088   65087 start.go:293] postStartSetup for "no-preload-118016" (driver="kvm2")
	I0804 00:15:04.916100   65087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:04.916113   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:04.916429   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:04.916453   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.919155   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919485   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.919514   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919657   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.919859   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.920026   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.920166   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.012754   65087 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:05.017004   65087 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:05.017024   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:05.017091   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:05.017180   65087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:05.017293   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:05.026980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:05.051265   65087 start.go:296] duration metric: took 135.164451ms for postStartSetup
	I0804 00:15:05.051309   65087 fix.go:56] duration metric: took 18.608839754s for fixHost
	I0804 00:15:05.051331   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.054286   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054683   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.054710   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054876   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.055127   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055321   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055485   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.055668   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:05.055870   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:05.055882   65087 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:05.170285   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730505.141206116
	
	I0804 00:15:05.170314   65087 fix.go:216] guest clock: 1722730505.141206116
	I0804 00:15:05.170321   65087 fix.go:229] Guest: 2024-08-04 00:15:05.141206116 +0000 UTC Remote: 2024-08-04 00:15:05.051313292 +0000 UTC m=+243.154971169 (delta=89.892824ms)
	I0804 00:15:05.170341   65087 fix.go:200] guest clock delta is within tolerance: 89.892824ms
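
fix.go compares the guest's wall clock against the host's and only forces a resync when the skew exceeds a tolerance; here the delta is about 90ms, so no adjustment is made. A tiny sketch of that comparison using the two timestamps recorded above (the tolerance value is an assumption, not minikube's actual constant):

// Sketch only: the clock-skew check fix.go records — compare the guest and
// host timestamps and flag a resync only beyond a tolerance.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1722730505, 141206116)                   // "guest clock" value from the log
	host := time.Date(2024, 8, 4, 0, 15, 5, 51313292, time.UTC) // the "Remote" timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed bound for illustration
	fmt.Printf("delta=%s within tolerance: %v\n", delta, delta <= tolerance)
}
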
	I0804 00:15:05.170359   65087 start.go:83] releasing machines lock for "no-preload-118016", held for 18.727925423s
	I0804 00:15:05.170392   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.170673   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:05.173694   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174084   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.174117   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174265   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.174828   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175015   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175103   65087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:05.175145   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.175263   65087 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:05.175286   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.177906   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178280   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178307   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178329   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178470   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.178688   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.178777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178832   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178854   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.178945   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.179025   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.179111   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.179265   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.179417   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.282397   65087 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:05.288682   65087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:05.434388   65087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:05.440857   65087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:05.440937   65087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:05.461853   65087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:05.461879   65087 start.go:495] detecting cgroup driver to use...
	I0804 00:15:05.461944   65087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:05.478397   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:05.494093   65087 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:05.494151   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:05.509391   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:05.524127   65087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:05.640185   65087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:05.784994   65087 docker.go:233] disabling docker service ...
	I0804 00:15:05.785071   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:05.802802   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:05.818424   65087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:05.970147   65087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:06.099759   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:06.114434   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:06.132989   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:06.433914   65087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0804 00:15:06.433969   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.452155   65087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:06.452245   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.464730   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.475848   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.488341   65087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:06.501984   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.514776   65087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.534773   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.547076   65087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:06.558639   65087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:06.558695   65087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:06.572920   65087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:06.583298   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:06.705307   65087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:06.845776   65087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:06.845840   65087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:06.851710   65087 start.go:563] Will wait 60s for crictl version
	I0804 00:15:06.851764   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:06.855899   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:06.904392   65087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:06.904493   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.932866   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.963071   65087 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0804 00:15:05.195984   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Start
	I0804 00:15:05.196175   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring networks are active...
	I0804 00:15:05.196904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network default is active
	I0804 00:15:05.197256   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network mk-default-k8s-diff-port-969068 is active
	I0804 00:15:05.197709   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Getting domain xml...
	I0804 00:15:05.198474   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Creating domain...
	I0804 00:15:06.489009   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting to get IP...
	I0804 00:15:06.490137   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490569   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490641   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.490549   66290 retry.go:31] will retry after 298.701839ms: waiting for machine to come up
	I0804 00:15:06.791467   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791938   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791960   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.791894   66290 retry.go:31] will retry after 373.395742ms: waiting for machine to come up
	I0804 00:15:07.166622   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167139   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.167048   66290 retry.go:31] will retry after 404.799649ms: waiting for machine to come up
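
The repeated "will retry after …: waiting for machine to come up" messages are libmachine polling the libvirt DHCP leases with a growing, jittered delay until the domain reports an IP. A rough sketch of that retry shape follows; lookupIP is a stand-in stub invented for illustration, since the real lookup queries libvirt.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real libvirt DHCP-lease query; it is a stub
// that fails a few times before "finding" an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.132", nil
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, which is why the logged waits
		// (298ms, 373ms, 404ms, ...) creep upward rather than repeating.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}
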
	I0804 00:15:06.995779   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.495822   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.995970   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.495870   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.996379   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.495852   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.495912   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.996591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.495964   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.964314   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:06.967088   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967517   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:06.967547   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967787   65087 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:06.973133   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
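
The shell one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the host-side gateway, dropping any stale entry for that name first. A sketch of the same idempotent edit done directly in Go is below; pinHost and the scratch file name are invented for illustration, not minikube's helpers.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps name to ip,
// dropping any stale mapping for the same name first.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the old mapping, like `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	// Write a temp file first, then move it into place, as the shell version does.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	// Illustrative only: point at a scratch copy rather than the real /etc/hosts.
	if err := pinHost("hosts.copy", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
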
	I0804 00:15:06.990153   65087 kubeadm.go:883] updating cluster {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:06.990339   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.297536   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.591746   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.874720   65087 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:15:07.874798   65087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:07.914104   65087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0804 00:15:07.914127   65087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:15:07.914172   65087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.914212   65087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:07.914237   65087 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0804 00:15:07.914253   65087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.914324   65087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.914225   65087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.915833   65087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915838   65087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.915816   65087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 00:15:07.915882   65087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.915962   65087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.916150   65087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.048225   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.050828   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.051873   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.056880   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.087643   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.091720   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0804 00:15:08.116485   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.173591   65087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0804 00:15:08.173642   65087 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.173686   65087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0804 00:15:08.173704   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.173725   65087 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.173777   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.191254   65087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0804 00:15:08.191298   65087 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.191352   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.195238   65087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0804 00:15:08.195290   65087 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.195340   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.246005   65087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0804 00:15:08.246048   65087 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.246100   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.336855   65087 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0804 00:15:08.336936   65087 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.336945   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.336965   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.337078   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.337120   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.337161   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.337207   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.425270   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425297   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.425296   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.425455   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425522   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.458378   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.458520   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.460719   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460827   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460889   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0804 00:15:08.460983   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:08.492690   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0804 00:15:08.492789   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492808   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492839   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:08.492852   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492863   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492932   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492976   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0804 00:15:08.493036   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0804 00:15:08.763401   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063302   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.570424927s)
	I0804 00:15:11.063326   65087 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.570469177s)
	I0804 00:15:11.063341   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0804 00:15:11.063348   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0804 00:15:11.063355   65087 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063377   65087 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.299939136s)
	I0804 00:15:11.063414   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063438   65087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0804 00:15:11.063468   65087 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063516   65087 ssh_runner.go:195] Run: which crictl
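
The block above is the cache-load path: for each required image the runtime is asked (podman image inspect --format {{.Id}}) whether the image is already present at the expected ID, a stale copy is removed with crictl rmi, and the cached tarball is imported with podman load -i. A compressed sketch of that loop follows; the image table, expected ID, and helper names are placeholders invented for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// present reports whether the runtime already has image at the wanted ID.
func present(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) == wantID
}

// load removes any stale copy and imports the cached tarball.
func load(image, tarball string) error {
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "not found"
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	// Hypothetical image -> (expected ID, cache path) table for illustration.
	images := map[string][2]string{
		"registry.k8s.io/pause:3.10": {"sha256:<expected-id>", "/var/lib/minikube/images/pause_3.10"},
	}
	for name, v := range images {
		if present(name, v[0]) {
			fmt.Println("already loaded in runtime:", name)
			continue
		}
		fmt.Println("needs transfer:", name)
		if err := load(name, v[1]); err != nil {
			fmt.Println("load failed:", name, err)
		}
	}
}
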
	I0804 00:15:07.573639   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574103   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574150   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.574068   66290 retry.go:31] will retry after 552.033422ms: waiting for machine to come up
	I0804 00:15:08.127755   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128317   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128345   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.128254   66290 retry.go:31] will retry after 601.661676ms: waiting for machine to come up
	I0804 00:15:08.731160   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731571   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.731526   66290 retry.go:31] will retry after 899.954536ms: waiting for machine to come up
	I0804 00:15:09.632769   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633217   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:09.633188   66290 retry.go:31] will retry after 1.096119877s: waiting for machine to come up
	I0804 00:15:10.731586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732092   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732116   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:10.732062   66290 retry.go:31] will retry after 1.09033143s: waiting for machine to come up
	I0804 00:15:11.824287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824697   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824723   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:11.824648   66290 retry.go:31] will retry after 1.458040473s: waiting for machine to come up
	I0804 00:15:11.996494   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.496005   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.996429   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.496310   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.996525   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.495995   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.996172   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.495809   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.996016   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.496210   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.840723   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.777281435s)
	I0804 00:15:14.840759   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0804 00:15:14.840758   65087 ssh_runner.go:235] Completed: which crictl: (3.777229082s)
	I0804 00:15:14.840769   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:14.894482   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0804 00:15:14.894607   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:16.729218   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (1.888374505s)
	I0804 00:15:16.729270   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0804 00:15:16.729277   65087 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.834630766s)
	I0804 00:15:16.729304   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:16.729312   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0804 00:15:16.729368   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:13.284961   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:13.285332   66290 retry.go:31] will retry after 2.307816709s: waiting for machine to come up
	I0804 00:15:15.594435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594855   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594885   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:15.594804   66290 retry.go:31] will retry after 2.83542957s: waiting for machine to come up
	I0804 00:15:16.996765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.496069   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.995828   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.495847   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.996276   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.496155   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.996708   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.996145   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.496193   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.031187   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.301792704s)
	I0804 00:15:19.031309   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0804 00:15:19.031343   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:19.031389   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:20.493093   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.461677557s)
	I0804 00:15:20.493134   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0804 00:15:20.493152   65087 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:20.493202   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:18.433690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434156   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434188   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:18.434105   66290 retry.go:31] will retry after 2.563856777s: waiting for machine to come up
	I0804 00:15:20.999804   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000307   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:21.000236   66290 retry.go:31] will retry after 3.783170851s: waiting for machine to come up
	I0804 00:15:26.095635   64502 start.go:364] duration metric: took 52.776761645s to acquireMachinesLock for "embed-certs-877598"
	I0804 00:15:26.095695   64502 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:26.095703   64502 fix.go:54] fixHost starting: 
	I0804 00:15:26.096104   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:26.096143   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:26.113770   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0804 00:15:26.114303   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:26.114742   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:15:26.114768   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:26.115137   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:26.115330   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:26.115508   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:15:26.117156   64502 fix.go:112] recreateIfNeeded on embed-certs-877598: state=Stopped err=<nil>
	I0804 00:15:26.117179   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	W0804 00:15:26.117343   64502 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:26.119743   64502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-877598" ...
	I0804 00:15:21.996520   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.495922   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.995766   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.495923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.995770   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.496788   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.996759   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.996017   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.496445   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.363529   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870304087s)
	I0804 00:15:22.363559   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0804 00:15:22.363573   65087 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:22.363618   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:23.009879   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0804 00:15:23.009924   65087 cache_images.go:123] Successfully loaded all cached images
	I0804 00:15:23.009932   65087 cache_images.go:92] duration metric: took 15.095790334s to LoadCachedImages
	I0804 00:15:23.009946   65087 kubeadm.go:934] updating node { 192.168.61.137 8443 v1.31.0-rc.0 crio true true} ...
	I0804 00:15:23.010145   65087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:23.010230   65087 ssh_runner.go:195] Run: crio config
	I0804 00:15:23.057968   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:23.057991   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:23.058002   65087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:23.058022   65087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118016 NodeName:no-preload-118016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:23.058149   65087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-118016"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
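
The kubeadm.yaml shown above is rendered from the options struct logged at kubeadm.go:181. A toy version of that rendering step, templating only the advertise address and node name, is below; the struct and template names are assumptions for illustration, not minikube's real generator.

package main

import (
	"os"
	"text/template"
)

// opts carries just the two values this sketch substitutes; the real
// generator fills in far more of the config shown above.
type opts struct {
	AdvertiseAddress string
	NodeName         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above, purely for illustration.
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.61.137",
		NodeName:         "no-preload-118016",
	})
}
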
	
	I0804 00:15:23.058210   65087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0804 00:15:23.068635   65087 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:23.068713   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:23.077867   65087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0804 00:15:23.094220   65087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0804 00:15:23.110798   65087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0804 00:15:23.132230   65087 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:23.136622   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:23.149229   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:23.284623   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:23.309115   65087 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016 for IP: 192.168.61.137
	I0804 00:15:23.309212   65087 certs.go:194] generating shared ca certs ...
	I0804 00:15:23.309242   65087 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:23.309451   65087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:23.309509   65087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:23.309525   65087 certs.go:256] generating profile certs ...
	I0804 00:15:23.309633   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.key
	I0804 00:15:23.309718   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key.794a08a1
	I0804 00:15:23.309775   65087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key
	I0804 00:15:23.309951   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:23.309992   65087 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:23.310006   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:23.310050   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:23.310084   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:23.310125   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:23.310186   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:23.310811   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:23.346479   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:23.390508   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:23.419626   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:23.453891   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:15:23.481597   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:23.507749   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:23.537567   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:23.565469   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:23.590844   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:23.618748   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:23.645921   65087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:23.664034   65087 ssh_runner.go:195] Run: openssl version
	I0804 00:15:23.670083   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:23.681080   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685717   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685777   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.691573   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:23.702260   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:23.713185   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717747   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717803   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.723598   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:23.734445   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:23.745394   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750239   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750312   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.756471   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:23.767795   65087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:23.772483   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:23.778613   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:23.784560   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:23.790455   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:23.796260   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:23.802405   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
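
Each openssl x509 -checkend 86400 run above asks whether a certificate will still be valid 24 hours from now. The same check can be expressed with Go's crypto/x509; this is a sketch of the equivalent test, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window — the Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; run on the minikube VM, or point at any cert.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
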
	I0804 00:15:23.808623   65087 kubeadm.go:392] StartCluster: {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:23.808710   65087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:23.808753   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.857908   65087 cri.go:89] found id: ""
	I0804 00:15:23.857983   65087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:23.868694   65087 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:23.868717   65087 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:23.868789   65087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:23.878826   65087 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:23.879879   65087 kubeconfig.go:125] found "no-preload-118016" server: "https://192.168.61.137:8443"
	I0804 00:15:23.882653   65087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:23.893441   65087 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.137
	I0804 00:15:23.893475   65087 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:23.893489   65087 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:23.893533   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.933954   65087 cri.go:89] found id: ""
	I0804 00:15:23.934026   65087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:23.951080   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:23.962250   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:23.962274   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:23.962327   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:23.971760   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:23.971817   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:23.981767   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:23.991443   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:23.991494   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:24.001911   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.011927   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:24.011988   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.022349   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:24.032305   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:24.032371   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:24.042416   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:24.052403   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:24.163413   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.106900   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.323496   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.410928   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.569137   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:25.569221   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.069288   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.570343   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.615965   65087 api_server.go:72] duration metric: took 1.046825245s to wait for apiserver process to appear ...
	I0804 00:15:26.615997   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:26.616022   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:26.616618   65087 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
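
The "waiting for apiserver healthz status" phase polls https://192.168.61.137:8443/healthz, treating connection-refused as "not up yet" while the static pod starts. A bare-bones poller with the same shape is below; TLS verification is skipped only because this sketch carries no cluster CA, whereas the real check trusts the cluster certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	url := "https://192.168.61.137:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skip verification purely for this sketch; real code should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz status:", resp.Status)
		} else {
			fmt.Println("stopped:", err) // e.g. connection refused while the pod starts
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for healthz")
	os.Exit(1)
}
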
	I0804 00:15:24.788329   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788775   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Found IP for machine: 192.168.39.132
	I0804 00:15:24.788799   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has current primary IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788811   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserving static IP address...
	I0804 00:15:24.789238   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.789266   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | skip adding static IP to network mk-default-k8s-diff-port-969068 - found existing host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"}
	I0804 00:15:24.789287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserved static IP address: 192.168.39.132
	I0804 00:15:24.789303   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for SSH to be available...
	I0804 00:15:24.789333   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Getting to WaitForSSH function...
	I0804 00:15:24.791371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791734   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.791762   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH client type: external
	I0804 00:15:24.791934   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa (-rw-------)
	I0804 00:15:24.791975   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:24.791994   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | About to run SSH command:
	I0804 00:15:24.792010   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | exit 0
	I0804 00:15:24.921420   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:24.921795   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetConfigRaw
	I0804 00:15:24.922375   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:24.925074   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.925431   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925680   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:15:24.925904   65441 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:24.925924   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:24.926120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:24.928597   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929006   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.929045   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929171   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:24.929334   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929498   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:24.929814   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:24.930001   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:24.930012   65441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:25.046325   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:25.046355   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046703   65441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-969068"
	I0804 00:15:25.046733   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046940   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.049807   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050383   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.050427   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050547   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.050739   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.050937   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.051131   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.051296   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.051504   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.051525   65441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-969068 && echo "default-k8s-diff-port-969068" | sudo tee /etc/hostname
	I0804 00:15:25.182512   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-969068
	
	I0804 00:15:25.182552   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.185673   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186019   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.186051   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186241   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.186425   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186551   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186660   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.186853   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.187034   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.187051   65441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-969068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-969068/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-969068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:25.313435   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:25.313470   65441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:25.313518   65441 buildroot.go:174] setting up certificates
	I0804 00:15:25.313531   65441 provision.go:84] configureAuth start
	I0804 00:15:25.313544   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.313856   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:25.316883   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317233   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.317287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317475   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.319773   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320180   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.320214   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320404   65441 provision.go:143] copyHostCerts
	I0804 00:15:25.320459   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:25.320467   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:25.320531   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:25.320666   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:25.320675   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:25.320702   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:25.320769   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:25.320777   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:25.320804   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:25.320871   65441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-969068 san=[127.0.0.1 192.168.39.132 default-k8s-diff-port-969068 localhost minikube]
	I0804 00:15:25.374535   65441 provision.go:177] copyRemoteCerts
	I0804 00:15:25.374590   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:25.374613   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.377629   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378047   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.378073   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.378478   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.378672   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.378897   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.469632   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:25.495826   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0804 00:15:25.527006   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:25.557603   65441 provision.go:87] duration metric: took 244.055462ms to configureAuth
	I0804 00:15:25.557637   65441 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:25.557873   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:25.557982   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.560974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561339   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.561389   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.561740   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.561881   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.562043   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.562248   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.562456   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.562471   65441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:25.835452   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:25.835480   65441 machine.go:97] duration metric: took 909.563441ms to provisionDockerMachine
	I0804 00:15:25.835496   65441 start.go:293] postStartSetup for "default-k8s-diff-port-969068" (driver="kvm2")
	I0804 00:15:25.835512   65441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:25.835541   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:25.835846   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:25.835873   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.838713   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839124   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.839151   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.839465   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.839634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.839779   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.928376   65441 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:25.932472   65441 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:25.932498   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:25.932608   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:25.932775   65441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:25.932951   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:25.943100   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:25.969517   65441 start.go:296] duration metric: took 134.003956ms for postStartSetup
	I0804 00:15:25.969567   65441 fix.go:56] duration metric: took 20.799045329s for fixHost
	I0804 00:15:25.969591   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.972743   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973172   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.973204   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973342   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.973596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973768   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973944   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.974158   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.974330   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.974343   65441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:26.095438   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730526.053053982
	
	I0804 00:15:26.095462   65441 fix.go:216] guest clock: 1722730526.053053982
	I0804 00:15:26.095472   65441 fix.go:229] Guest: 2024-08-04 00:15:26.053053982 +0000 UTC Remote: 2024-08-04 00:15:25.969572309 +0000 UTC m=+213.641216658 (delta=83.481673ms)
	I0804 00:15:26.095524   65441 fix.go:200] guest clock delta is within tolerance: 83.481673ms
	I0804 00:15:26.095534   65441 start.go:83] releasing machines lock for "default-k8s-diff-port-969068", held for 20.925048627s
	I0804 00:15:26.095570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.095862   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:26.098718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099112   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.099145   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.099929   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100182   65441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:26.100222   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.100347   65441 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:26.100388   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.103393   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103720   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103942   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.103963   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104142   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104159   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.104243   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104347   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104384   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104499   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104545   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104728   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.104881   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.214704   65441 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:26.221287   65441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:26.378021   65441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:26.385673   65441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:26.385751   65441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:26.403073   65441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:26.403104   65441 start.go:495] detecting cgroup driver to use...
	I0804 00:15:26.403193   65441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:26.421108   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:26.435556   65441 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:26.435627   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:26.455219   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:26.477841   65441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:26.626980   65441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:26.806808   65441 docker.go:233] disabling docker service ...
	I0804 00:15:26.806887   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:26.824079   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:26.839225   65441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:26.967375   65441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:27.136156   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:27.151822   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:27.173326   65441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:27.173404   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.184431   65441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:27.184509   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.194890   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.208349   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.222326   65441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:27.237212   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.249571   65441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.274913   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.288929   65441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:27.305789   65441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:27.305863   65441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:27.321708   65441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:27.332129   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:27.482279   65441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:27.638388   65441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:27.638465   65441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:27.644607   65441 start.go:563] Will wait 60s for crictl version
	I0804 00:15:27.644665   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:15:27.648663   65441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:27.691731   65441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:27.691824   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.731365   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.767416   65441 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:26.121074   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Start
	I0804 00:15:26.121263   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring networks are active...
	I0804 00:15:26.122075   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network default is active
	I0804 00:15:26.122471   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network mk-embed-certs-877598 is active
	I0804 00:15:26.122884   64502 main.go:141] libmachine: (embed-certs-877598) Getting domain xml...
	I0804 00:15:26.123684   64502 main.go:141] libmachine: (embed-certs-877598) Creating domain...
	I0804 00:15:27.536026   64502 main.go:141] libmachine: (embed-certs-877598) Waiting to get IP...
	I0804 00:15:27.537165   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.537650   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.537734   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.537654   66522 retry.go:31] will retry after 277.473157ms: waiting for machine to come up
	I0804 00:15:27.817330   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.817824   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.817858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.817788   66522 retry.go:31] will retry after 322.160841ms: waiting for machine to come up
	I0804 00:15:28.141287   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.141818   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.141855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.141775   66522 retry.go:31] will retry after 325.833359ms: waiting for machine to come up
	I0804 00:15:28.469440   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.469976   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.470015   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.469933   66522 retry.go:31] will retry after 372.304971ms: waiting for machine to come up
	I0804 00:15:28.843604   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.844376   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.844400   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.844297   66522 retry.go:31] will retry after 607.361674ms: waiting for machine to come up
	I0804 00:15:29.453082   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:29.453557   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:29.453586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:29.453527   66522 retry.go:31] will retry after 615.002468ms: waiting for machine to come up
	I0804 00:15:30.070598   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.071112   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.071134   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.071079   66522 retry.go:31] will retry after 834.292107ms: waiting for machine to come up
	I0804 00:15:27.116719   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.030589   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.030625   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.030641   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.091459   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.091494   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.116633   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.149335   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.149394   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:30.617009   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.622086   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.622117   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.116320   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.125065   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:31.125143   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.617091   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.627142   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:15:31.636371   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:15:31.636405   65087 api_server.go:131] duration metric: took 5.020400356s to wait for apiserver health ...
	I0804 00:15:31.636414   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:31.636420   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:31.638145   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:26.996399   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.496810   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.995825   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.496395   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.996561   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.496735   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.996542   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.496406   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.996259   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.496307   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.639553   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:31.658269   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:31.685188   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:31.703581   65087 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:31.703627   65087 system_pods.go:61] "coredns-6f6b679f8f-9vdxc" [fd645695-cc1d-4394-96b0-832f48e9cf26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:31.703638   65087 system_pods.go:61] "etcd-no-preload-118016" [a329ecd7-7574-48f4-a776-7b7c05465f8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:31.703649   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [43d313aa-1844-488d-8925-b744f504323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:31.703661   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [d56a5461-29d3-47f7-95df-a7fc6b52ca2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:31.703669   65087 system_pods.go:61] "kube-proxy-8bcg7" [c2b43118-5216-41bf-9f16-00f11ca1eab5] Running
	I0804 00:15:31.703678   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [53dc528c-2f00-4ca6-86c6-d02f4533229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:31.703687   65087 system_pods.go:61] "metrics-server-6867b74b74-5xfgz" [c558b60d-3816-406a-addb-96cd42266bd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:31.703698   65087 system_pods.go:61] "storage-provisioner" [1edb442e-272f-4ef7-b3fb-7c46b915c61a] Running
	I0804 00:15:31.703707   65087 system_pods.go:74] duration metric: took 18.49198ms to wait for pod list to return data ...
	I0804 00:15:31.703721   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:31.712702   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:31.712735   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:31.712748   65087 node_conditions.go:105] duration metric: took 9.019815ms to run NodePressure ...
	I0804 00:15:31.712773   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:27.768972   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:27.772437   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.772860   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:27.772903   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.773135   65441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:27.777834   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:27.792279   65441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:27.792437   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:27.792493   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:27.833330   65441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:27.833453   65441 ssh_runner.go:195] Run: which lz4
	I0804 00:15:27.837836   65441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:15:27.842093   65441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:27.842128   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:29.410529   65441 crio.go:462] duration metric: took 1.572735301s to copy over tarball
	I0804 00:15:29.410610   65441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:32.062492   65441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.651848511s)
	I0804 00:15:32.062533   65441 crio.go:469] duration metric: took 2.651972207s to extract the tarball
	I0804 00:15:32.062545   65441 ssh_runner.go:146] rm: /preloaded.tar.lz4
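The cached preload tarball is copied to the node as /preloaded.tar.lz4, unpacked into /var with tar's lz4 filter, and then removed. To sanity-check the same tarball on the host without extracting it, a sketch:

	# assumes the host cache path shown in the scp line above
	lz4 -dc /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 | tar -tf - | head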
	I0804 00:15:32.100003   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:32.144166   65441 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:32.144192   65441 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:32.144201   65441 kubeadm.go:934] updating node { 192.168.39.132 8444 v1.30.3 crio true true} ...
	I0804 00:15:32.144327   65441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-969068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:32.144434   65441 ssh_runner.go:195] Run: crio config
	I0804 00:15:32.197593   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:32.197618   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:32.197630   65441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:32.197658   65441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-969068 NodeName:default-k8s-diff-port-969068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:32.197862   65441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-969068"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
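This is the kubeadm configuration minikube renders to /var/tmp/minikube/kubeadm.yaml.new and later copies into place as /var/tmp/minikube/kubeadm.yaml. If it ever needs to be checked independently of minikube, newer kubeadm releases can lint such a file; a sketch, assuming the validate subcommand is available in the v1.30.3 binary staged on the node:

	# assumes kubeadm config validate exists in this kubeadm version
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml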
	I0804 00:15:32.197937   65441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:32.208469   65441 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:32.208551   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:32.218194   65441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0804 00:15:32.237731   65441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:32.259599   65441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0804 00:15:32.281113   65441 ssh_runner.go:195] Run: grep 192.168.39.132	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:32.285559   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:32.298722   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:30.906612   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.907056   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.907086   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.907012   66522 retry.go:31] will retry after 1.489076061s: waiting for machine to come up
	I0804 00:15:32.397239   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:32.397614   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:32.397642   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:32.397568   66522 retry.go:31] will retry after 1.737097329s: waiting for machine to come up
	I0804 00:15:34.135859   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:34.136363   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:34.136393   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:34.136321   66522 retry.go:31] will retry after 2.154712298s: waiting for machine to come up
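The retries above mean the embed-certs-877598 domain has not yet received a DHCP lease on the mk-embed-certs-877598 libvirt network. When reproducing this by hand, the lease state can be queried against the same libvirt URI the test uses (qemu:///system); a sketch:

	# list leases on the profile's network, then ask libvirt for the domain's address
	virsh -c qemu:///system net-dhcp-leases mk-embed-certs-877598
	virsh -c qemu:///system domifaddr embed-certs-877598 --source lease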
	I0804 00:15:31.996780   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.496164   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.996444   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.496838   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.996533   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.496300   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.996772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.495937   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.996834   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.496277   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.982926   65087 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989888   65087 kubeadm.go:739] kubelet initialised
	I0804 00:15:31.989926   65087 kubeadm.go:740] duration metric: took 6.968445ms waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989938   65087 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:31.997210   65087 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:34.748142   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
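coredns-6f6b679f8f-9vdxc is still not Ready here, so pod_ready keeps polling within its 4m0s budget. An equivalent manual check, assuming the kubeconfig context carries the profile name no-preload-118016 as minikube normally sets up:

	# context name assumed to match the minikube profile
	kubectl --context no-preload-118016 -n kube-system get pod coredns-6f6b679f8f-9vdxc
	kubectl --context no-preload-118016 -n kube-system wait --for=condition=Ready pod/coredns-6f6b679f8f-9vdxc --timeout=4m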
	I0804 00:15:32.432400   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:32.450525   65441 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068 for IP: 192.168.39.132
	I0804 00:15:32.450548   65441 certs.go:194] generating shared ca certs ...
	I0804 00:15:32.450571   65441 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:32.450738   65441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:32.450801   65441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:32.450815   65441 certs.go:256] generating profile certs ...
	I0804 00:15:32.450922   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.key
	I0804 00:15:32.451000   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key.a17bd5dd
	I0804 00:15:32.451053   65441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key
	I0804 00:15:32.451199   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:32.451242   65441 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:32.451255   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:32.451279   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:32.451303   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:32.451326   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:32.451365   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:32.451910   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:32.505178   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:32.557546   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:32.596512   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:32.635476   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 00:15:32.687156   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:32.716537   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:32.746312   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:15:32.777788   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:32.806730   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:32.835822   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:32.864241   65441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:32.886754   65441 ssh_runner.go:195] Run: openssl version
	I0804 00:15:32.893177   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:32.904847   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909871   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909937   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.916357   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:32.927322   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:32.939447   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944221   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944275   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.950218   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:32.966506   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:32.981288   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986761   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986831   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.993077   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
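The symlink names created in these steps (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention for CA lookup in /etc/ssl/certs: the file name is the certificate's subject hash plus a numeric suffix, and the hash is exactly what the preceding "openssl x509 -hash -noout" calls print. For example:

	# prints the subject hash used as the symlink name (b5213941 for minikubeCA.pem above)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem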
	I0804 00:15:33.007428   65441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:33.013290   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:33.019997   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:33.026423   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:33.033004   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:33.039205   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:33.045367   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
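Each -checkend 86400 call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); it exits 0 when the cert remains valid for at least that long, which is how minikube decides the existing certs can be reused. A standalone sketch:

	# exits 0 and prints "Certificate will not expire" if still valid for 24h+
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 && echo "valid for 24h+"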
	I0804 00:15:33.051462   65441 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:33.051546   65441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:33.051605   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.094354   65441 cri.go:89] found id: ""
	I0804 00:15:33.094433   65441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:33.105416   65441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:33.105439   65441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:33.105480   65441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:33.115838   65441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:33.117466   65441 kubeconfig.go:125] found "default-k8s-diff-port-969068" server: "https://192.168.39.132:8444"
	I0804 00:15:33.120806   65441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:33.130533   65441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.132
	I0804 00:15:33.130567   65441 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:33.130579   65441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:33.130628   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.178718   65441 cri.go:89] found id: ""
	I0804 00:15:33.178813   65441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:33.199000   65441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:33.212169   65441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:33.212188   65441 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:33.212255   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0804 00:15:33.225192   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:33.225254   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:33.239194   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0804 00:15:33.252402   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:33.252470   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:33.265198   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.276564   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:33.276636   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.288785   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0804 00:15:33.299848   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:33.299904   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:33.311115   65441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:33.322121   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:33.442578   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.526815   65441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084197731s)
	I0804 00:15:34.526857   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.803105   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.893681   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.978573   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:34.978668   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.479179   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.979520   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.063056   65441 api_server.go:72] duration metric: took 1.084463955s to wait for apiserver process to appear ...
	I0804 00:15:36.063161   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:36.063203   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.063755   65441 api_server.go:269] stopped: https://192.168.39.132:8444/healthz: Get "https://192.168.39.132:8444/healthz": dial tcp 192.168.39.132:8444: connect: connection refused
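The connection refused above only means the restarted apiserver is not listening yet; the 403 responses that follow are anonymous requests rejected by RBAC, and the 500s come from post-start hooks that have not finished. To run the same health probe with cluster credentials rather than anonymously, a sketch (assuming the kubeconfig context is named after the profile):

	kubectl --context default-k8s-diff-port-969068 get --raw '/healthz?verbose'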
	I0804 00:15:36.563501   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.293051   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:36.293675   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:36.293710   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:36.293604   66522 retry.go:31] will retry after 2.826050203s: waiting for machine to come up
	I0804 00:15:39.120961   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:39.121602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:39.121628   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:39.121554   66522 retry.go:31] will retry after 2.710829438s: waiting for machine to come up
	I0804 00:15:36.996761   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.495885   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.995785   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.496550   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.996645   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.995851   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.496685   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.995896   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.495864   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.005216   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.505397   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.405829   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.405895   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.405913   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.433026   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.433063   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.563242   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.568554   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:39.568591   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.064078   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.085940   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.085978   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.564041   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.569785   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.569812   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.063334   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.068113   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.068135   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.563691   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.569214   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.569248   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.063737   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.068227   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.068260   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.563309   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.567740   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.567775   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:43.063306   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:43.067611   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:15:43.073842   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:15:43.073868   65441 api_server.go:131] duration metric: took 7.010684682s to wait for apiserver health ...
	I0804 00:15:43.073879   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:43.073887   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:43.075779   65441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:43.077123   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:43.088611   65441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:43.109845   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:43.119204   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:43.119235   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:43.119246   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:43.119259   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:43.119269   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:43.119275   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:15:43.119282   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:43.119300   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:43.119309   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:15:43.119317   65441 system_pods.go:74] duration metric: took 9.453775ms to wait for pod list to return data ...
	I0804 00:15:43.119328   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:43.122493   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:43.122516   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:43.122528   65441 node_conditions.go:105] duration metric: took 3.191087ms to run NodePressure ...
	I0804 00:15:43.122547   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:43.391258   65441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395252   65441 kubeadm.go:739] kubelet initialised
	I0804 00:15:43.395274   65441 kubeadm.go:740] duration metric: took 3.992079ms waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395282   65441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:43.400173   65441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.404618   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404645   65441 pod_ready.go:81] duration metric: took 4.449232ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.404665   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404675   65441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.409134   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409165   65441 pod_ready.go:81] duration metric: took 4.471898ms for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.409178   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409190   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.414342   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414362   65441 pod_ready.go:81] duration metric: took 5.160435ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.414374   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414383   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.513956   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.513987   65441 pod_ready.go:81] duration metric: took 99.59507ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.514003   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.514033   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.913592   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913619   65441 pod_ready.go:81] duration metric: took 399.572927ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.913628   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913634   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.313833   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313864   65441 pod_ready.go:81] duration metric: took 400.220214ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.313878   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313886   65441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.713583   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713616   65441 pod_ready.go:81] duration metric: took 399.716432ms for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.713636   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713647   65441 pod_ready.go:38] duration metric: took 1.318356042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:44.713666   65441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:15:44.725908   65441 ops.go:34] apiserver oom_adj: -16
	I0804 00:15:44.725935   65441 kubeadm.go:597] duration metric: took 11.620489409s to restartPrimaryControlPlane
	I0804 00:15:44.725947   65441 kubeadm.go:394] duration metric: took 11.674491721s to StartCluster
	I0804 00:15:44.725966   65441 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.726046   65441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:15:44.728392   65441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.728702   65441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:15:44.728805   65441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:15:44.728895   65441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728942   65441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.728954   65441 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:15:44.728958   65441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728990   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.728967   65441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.729027   65441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-969068"
	I0804 00:15:44.729039   65441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.729054   65441 addons.go:243] addon metrics-server should already be in state true
	I0804 00:15:44.729143   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.729436   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729470   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729515   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729564   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729598   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729642   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.728909   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:44.730486   65441 out.go:177] * Verifying Kubernetes components...
	I0804 00:15:44.731972   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:44.748737   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0804 00:15:44.749200   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0804 00:15:44.749311   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0804 00:15:44.749582   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749691   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749858   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.750128   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750144   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750153   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750171   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750326   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750347   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750609   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750617   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750810   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.751212   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.751249   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751286   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.751733   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751780   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.754574   65441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.754616   65441 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:15:44.754649   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.755038   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.755080   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.769763   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0804 00:15:44.770311   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.770828   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.770850   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.771209   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.771371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.771935   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0804 00:15:44.773284   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.773416   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0804 00:15:44.773646   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.773854   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.773866   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.773981   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.774227   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.774529   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.774551   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.774665   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.774711   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.774938   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.775078   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.776166   65441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:15:44.776690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.777692   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:15:44.777708   65441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:15:44.777724   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.778473   65441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:41.833728   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:41.834246   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:41.834270   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:41.834210   66522 retry.go:31] will retry after 2.891635961s: waiting for machine to come up
	I0804 00:15:44.727424   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727895   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has current primary IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727919   64502 main.go:141] libmachine: (embed-certs-877598) Found IP for machine: 192.168.50.140
	I0804 00:15:44.727943   64502 main.go:141] libmachine: (embed-certs-877598) Reserving static IP address...
	I0804 00:15:44.728570   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.728602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | skip adding static IP to network mk-embed-certs-877598 - found existing host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"}
	I0804 00:15:44.728617   64502 main.go:141] libmachine: (embed-certs-877598) Reserved static IP address: 192.168.50.140
	I0804 00:15:44.728634   64502 main.go:141] libmachine: (embed-certs-877598) Waiting for SSH to be available...
	I0804 00:15:44.728648   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Getting to WaitForSSH function...
	I0804 00:15:44.731684   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732102   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.732137   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732388   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH client type: external
	I0804 00:15:44.732408   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa (-rw-------)
	I0804 00:15:44.732438   64502 main.go:141] libmachine: (embed-certs-877598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:44.732448   64502 main.go:141] libmachine: (embed-certs-877598) DBG | About to run SSH command:
	I0804 00:15:44.732462   64502 main.go:141] libmachine: (embed-certs-877598) DBG | exit 0
	I0804 00:15:44.873689   64502 main.go:141] libmachine: (embed-certs-877598) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:44.874033   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetConfigRaw
	I0804 00:15:44.874716   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:44.877406   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.877823   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.877855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.878130   64502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/config.json ...
	I0804 00:15:44.878358   64502 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:44.878382   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:44.878563   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:44.880862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881215   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.881253   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881427   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:44.881597   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881785   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881958   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:44.882150   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:44.882381   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:44.882399   64502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:44.998143   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:44.998172   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998534   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:15:44.998564   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.001998   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002508   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.002545   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002691   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.002847   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003026   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003175   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.003388   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.003592   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.003606   64502 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-877598 && echo "embed-certs-877598" | sudo tee /etc/hostname
	I0804 00:15:45.142065   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-877598
	
	I0804 00:15:45.142123   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.145427   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.145858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.145912   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.146133   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.146279   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146422   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146595   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.146778   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.146991   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.147007   64502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-877598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-877598/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-877598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:45.275711   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:45.275748   64502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:45.275775   64502 buildroot.go:174] setting up certificates
	I0804 00:15:45.275790   64502 provision.go:84] configureAuth start
	I0804 00:15:45.275804   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:45.276145   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:45.279645   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280141   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.280166   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280298   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.283135   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283495   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.283521   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283693   64502 provision.go:143] copyHostCerts
	I0804 00:15:45.283754   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:45.283767   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:45.283837   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:45.283954   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:45.283975   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:45.284004   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:45.284168   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:45.284182   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:45.284214   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:45.284280   64502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.embed-certs-877598 san=[127.0.0.1 192.168.50.140 embed-certs-877598 localhost minikube]
	I0804 00:15:45.484805   64502 provision.go:177] copyRemoteCerts
	I0804 00:15:45.484861   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:45.484883   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.488177   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.488621   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488852   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.489032   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.489191   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.489340   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:45.580782   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:45.612118   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:45.638201   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:15:45.665741   64502 provision.go:87] duration metric: took 389.935703ms to configureAuth
	I0804 00:15:45.665778   64502 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:45.666008   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:45.666110   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.668942   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669312   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.669343   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.669812   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.669995   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.670158   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.670317   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.670501   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.670522   64502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:44.779708   65441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:44.779730   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:15:44.779747   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.780637   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781098   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.781120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.781424   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.781593   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.781753   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.783024   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783459   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.783479   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783895   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.784054   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.784219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.784343   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.793057   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0804 00:15:44.793581   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.794075   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.794094   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.794413   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.794586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.796274   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.796609   65441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:44.796623   65441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:15:44.796643   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.799445   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.799990   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.800254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.800698   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.800864   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.800974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.801305   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.962413   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:44.983596   65441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:45.057238   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:15:45.057261   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:15:45.082722   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:15:45.082745   65441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:15:45.088213   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:45.115230   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.115261   65441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:15:45.115325   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:45.164676   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.502008   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502040   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502381   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.502440   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502463   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.502476   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502484   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502701   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502718   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.510043   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.510065   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.510305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.510353   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.510364   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217233   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101870491s)
	I0804 00:15:46.217295   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217308   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.217585   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.217609   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217625   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217652   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.217719   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.218073   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.218091   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.218104   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.255756   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.091044347s)
	I0804 00:15:46.255802   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.255819   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256053   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256093   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256101   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256110   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.256117   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256412   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256446   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256454   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256465   65441 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-969068"
	I0804 00:15:46.258662   65441 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:15:41.995808   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.496612   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.996566   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.495812   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.996095   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.495902   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.996724   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.495854   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.996354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.496185   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.005235   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:44.003809   65087 pod_ready.go:92] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.003847   65087 pod_ready.go:81] duration metric: took 12.006609818s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.003861   65087 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009518   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.009541   65087 pod_ready.go:81] duration metric: took 5.671724ms for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009554   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014897   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.014923   65087 pod_ready.go:81] duration metric: took 5.360171ms for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014938   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521943   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.521968   65087 pod_ready.go:81] duration metric: took 1.507021563s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521983   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527550   65087 pod_ready.go:92] pod "kube-proxy-8bcg7" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.527575   65087 pod_ready.go:81] duration metric: took 5.585026ms for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527588   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604221   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.604245   65087 pod_ready.go:81] duration metric: took 76.648502ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604260   65087 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:46.260578   65441 addons.go:510] duration metric: took 1.531768603s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:15:46.988351   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:45.985471   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:45.985501   64502 machine.go:97] duration metric: took 1.107126695s to provisionDockerMachine
	I0804 00:15:45.985514   64502 start.go:293] postStartSetup for "embed-certs-877598" (driver="kvm2")
	I0804 00:15:45.985527   64502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:45.985554   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:45.985928   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:45.985962   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.989294   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989699   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.989731   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989875   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.990079   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.990230   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.990355   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.085684   64502 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:46.091660   64502 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:46.091690   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:46.091776   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:46.091873   64502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:46.092005   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:46.102373   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:46.129547   64502 start.go:296] duration metric: took 144.018823ms for postStartSetup
	I0804 00:15:46.129594   64502 fix.go:56] duration metric: took 20.033890858s for fixHost
	I0804 00:15:46.129619   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.132803   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133154   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.133190   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133347   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.133580   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.133766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.134016   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.134242   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:46.134454   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:46.134471   64502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:15:46.250499   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730546.219077490
	
	I0804 00:15:46.250528   64502 fix.go:216] guest clock: 1722730546.219077490
	I0804 00:15:46.250539   64502 fix.go:229] Guest: 2024-08-04 00:15:46.21907749 +0000 UTC Remote: 2024-08-04 00:15:46.129599456 +0000 UTC m=+355.401502879 (delta=89.478034ms)
	I0804 00:15:46.250567   64502 fix.go:200] guest clock delta is within tolerance: 89.478034ms
	I0804 00:15:46.250575   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 20.15490553s
	I0804 00:15:46.250609   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.250902   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:46.253782   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254164   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.254194   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254376   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.254967   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255169   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255247   64502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:46.255307   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.255376   64502 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:46.255399   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.260113   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260481   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.260511   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260529   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260702   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.260870   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.260995   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.261022   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.261045   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261182   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.261208   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.261305   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.261451   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261588   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.372061   64502 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:46.378356   64502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:46.527705   64502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:46.534567   64502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:46.534620   64502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:46.550801   64502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:46.550829   64502 start.go:495] detecting cgroup driver to use...
	I0804 00:15:46.550916   64502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:46.568369   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:46.583437   64502 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:46.583496   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:46.599267   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:46.614874   64502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:46.734467   64502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:46.900868   64502 docker.go:233] disabling docker service ...
	I0804 00:15:46.900941   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:46.915612   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:46.929948   64502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:47.056637   64502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:47.175277   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:47.190167   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:47.211062   64502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:47.211115   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.222459   64502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:47.222547   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.232964   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.243663   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.254387   64502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:47.266424   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.277323   64502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.296078   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
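Taken together, the sed and grep edits above converge on a small CRI-O drop-in. A minimal sketch of what /etc/crio/crio.conf.d/02-crio.conf ends up containing after these commands — reconstructed from the commands in this log rather than captured from the VM, with the usual [crio.image]/[crio.runtime] section placement assumed; minikube itself patches the existing file with sed rather than rewriting it like this:

    # Sketch only: the end state the sed edits above drive 02-crio.conf toward.
    sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF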
	I0804 00:15:47.307058   64502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:47.317138   64502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:47.317223   64502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:47.332104   64502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:47.342965   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:47.464208   64502 ssh_runner.go:195] Run: sudo systemctl restart crio
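The br_netfilter and ip_forward fixes just above are applied transiently (a modprobe plus an echo into /proc), which is all a throw-away minikube VM needs before crio is restarted. On a longer-lived host the same prerequisites are normally persisted with module-load and sysctl drop-ins; a minimal sketch, not something minikube itself writes:

    # Persist the bridge-netfilter module and the sysctls the transient fixes set.
    cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF

    cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward                = 1
    EOF

    sudo sysctl --system   # reload every sysctl drop-in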
	I0804 00:15:47.620127   64502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:47.620196   64502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:47.625103   64502 start.go:563] Will wait 60s for crictl version
	I0804 00:15:47.625165   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:15:47.628942   64502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:47.668593   64502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:47.668686   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.700313   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.737281   64502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:47.738730   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:47.741698   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742098   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:47.742144   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742310   64502 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:47.746883   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:47.760111   64502 kubeadm.go:883] updating cluster {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:47.760247   64502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:47.760305   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:47.801700   64502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:47.801766   64502 ssh_runner.go:195] Run: which lz4
	I0804 00:15:47.806337   64502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:15:47.811010   64502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:47.811050   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:49.359157   64502 crio.go:462] duration metric: took 1.552864688s to copy over tarball
	I0804 00:15:49.359245   64502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:46.996215   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.496634   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.996278   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.496184   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.996616   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.496240   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.996433   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.996600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.496459   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.611474   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:49.611879   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.616732   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:48.988818   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:49.988196   65441 node_ready.go:49] node "default-k8s-diff-port-969068" has status "Ready":"True"
	I0804 00:15:49.988220   65441 node_ready.go:38] duration metric: took 5.004585481s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:49.988229   65441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:49.994536   65441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001200   65441 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:50.001229   65441 pod_ready.go:81] duration metric: took 6.665744ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001243   65441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:52.009436   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.759772   64502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400487256s)
	I0804 00:15:51.759836   64502 crio.go:469] duration metric: took 2.40064418s to extract the tarball
	I0804 00:15:51.759848   64502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:51.800038   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:51.845098   64502 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:51.845124   64502 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:51.845134   64502 kubeadm.go:934] updating node { 192.168.50.140 8443 v1.30.3 crio true true} ...
	I0804 00:15:51.845258   64502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-877598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:51.845339   64502 ssh_runner.go:195] Run: crio config
	I0804 00:15:51.895019   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:15:51.895039   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:51.895048   64502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:51.895067   64502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-877598 NodeName:embed-certs-877598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:51.895202   64502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-877598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:51.895272   64502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:51.906363   64502 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:51.906426   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:51.917727   64502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0804 00:15:51.936370   64502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:51.953894   64502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0804 00:15:51.972910   64502 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:51.977288   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:51.990992   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:52.115808   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:52.133326   64502 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598 for IP: 192.168.50.140
	I0804 00:15:52.133373   64502 certs.go:194] generating shared ca certs ...
	I0804 00:15:52.133396   64502 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:52.133564   64502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:52.133613   64502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:52.133628   64502 certs.go:256] generating profile certs ...
	I0804 00:15:52.133736   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/client.key
	I0804 00:15:52.133824   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key.5511d337
	I0804 00:15:52.133873   64502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key
	I0804 00:15:52.134013   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:52.134077   64502 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:52.134091   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:52.134130   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:52.134168   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:52.134200   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:52.134256   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:52.134880   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:52.175985   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:52.209458   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:52.239097   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:52.271037   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0804 00:15:52.317594   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:52.353485   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:52.382159   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:52.407478   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:52.433103   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:52.457233   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:52.481534   64502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:52.500482   64502 ssh_runner.go:195] Run: openssl version
	I0804 00:15:52.509021   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:52.522431   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527125   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527184   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.533310   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:52.546085   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:52.557781   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562516   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562587   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.568403   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:52.580431   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:52.592706   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597280   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597382   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.603284   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:52.616100   64502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:52.621422   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:52.631811   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:52.639130   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:52.646159   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:52.652721   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:52.659459   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
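The openssl calls in this block do two different jobs: `openssl x509 -hash -noout` prints the subject-name hash that names the /etc/ssl/certs/<hash>.0 symlinks created just above, and `-checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A minimal sketch of running the same validity check by hand against one of the certs listed above:

    # Exit status encodes whether the cert stays valid for at least one more day.
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server cert valid for at least another 24h"
    else
      echo "etcd server cert expires within 24h (or could not be read)" >&2
    fi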
	I0804 00:15:52.665864   64502 kubeadm.go:392] StartCluster: {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:52.665991   64502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:52.666044   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.711272   64502 cri.go:89] found id: ""
	I0804 00:15:52.711346   64502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:52.722294   64502 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:52.722321   64502 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:52.722380   64502 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:52.733148   64502 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:52.734706   64502 kubeconfig.go:125] found "embed-certs-877598" server: "https://192.168.50.140:8443"
	I0804 00:15:52.737995   64502 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:52.749941   64502 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.140
	I0804 00:15:52.749986   64502 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:52.749998   64502 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:52.750043   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.793295   64502 cri.go:89] found id: ""
	I0804 00:15:52.793388   64502 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:52.811438   64502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:52.824055   64502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:52.824080   64502 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:52.824128   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:52.835393   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:52.835446   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:52.846732   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:52.856889   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:52.856942   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:52.869951   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.881836   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:52.881909   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.894121   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:52.905643   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:52.905711   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:52.917063   64502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:52.929399   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:53.132145   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.096969   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.325640   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.385886   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.472926   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:54.473002   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.973406   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.473410   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.578082   64502 api_server.go:72] duration metric: took 1.105154357s to wait for apiserver process to appear ...
	I0804 00:15:55.578170   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:55.578207   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:55.578847   64502 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I0804 00:15:51.996447   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.496265   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.996030   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.996313   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.495823   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.996360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.496652   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.996049   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:55.996141   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:56.045001   64758 cri.go:89] found id: ""
	I0804 00:15:56.045031   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.045042   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:56.045049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:56.045114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:56.086505   64758 cri.go:89] found id: ""
	I0804 00:15:56.086535   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.086547   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:56.086554   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:56.086618   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:56.125953   64758 cri.go:89] found id: ""
	I0804 00:15:56.125983   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.125994   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:56.126001   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:56.126060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:56.167313   64758 cri.go:89] found id: ""
	I0804 00:15:56.167343   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.167354   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:56.167361   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:56.167424   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:56.211102   64758 cri.go:89] found id: ""
	I0804 00:15:56.211132   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.211142   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:56.211149   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:56.211231   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:56.246894   64758 cri.go:89] found id: ""
	I0804 00:15:56.246926   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.246937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:56.246945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:56.247012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:56.281952   64758 cri.go:89] found id: ""
	I0804 00:15:56.281980   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.281991   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:56.281998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:56.282060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:56.317685   64758 cri.go:89] found id: ""
	I0804 00:15:56.317719   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.317733   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:56.317745   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:56.317762   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:56.335040   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:56.335069   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:56.475995   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:56.476017   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:56.476033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:56.567508   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:56.567551   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:56.618136   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:56.618166   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:54.112928   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.112987   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.179330   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.789712   65441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.789738   65441 pod_ready.go:81] duration metric: took 4.788487591s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.789749   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799762   65441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.799785   65441 pod_ready.go:81] duration metric: took 10.029856ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799795   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805685   65441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.805708   65441 pod_ready.go:81] duration metric: took 5.905108ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805718   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809797   65441 pod_ready.go:92] pod "kube-proxy-zz7fr" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.809818   65441 pod_ready.go:81] duration metric: took 4.094183ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809827   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820536   65441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.820557   65441 pod_ready.go:81] duration metric: took 10.722903ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820567   65441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:56.827543   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.078916   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.738609   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.738641   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:58.738657   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.772665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.772695   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:59.079121   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.083798   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.083829   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.579242   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.585343   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.585381   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.078877   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.099981   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.100022   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.578505   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.582665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.582692   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.172886   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:59.187045   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:59.187128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:59.225135   64758 cri.go:89] found id: ""
	I0804 00:15:59.225164   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.225173   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:59.225179   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:59.225255   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:59.262538   64758 cri.go:89] found id: ""
	I0804 00:15:59.262566   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.262573   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:59.262578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:59.262635   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:59.301665   64758 cri.go:89] found id: ""
	I0804 00:15:59.301697   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.301708   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:59.301715   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:59.301778   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:59.362742   64758 cri.go:89] found id: ""
	I0804 00:15:59.362766   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.362774   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:59.362779   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:59.362834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:59.404398   64758 cri.go:89] found id: ""
	I0804 00:15:59.404431   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.404509   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:59.404525   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:59.404594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:59.454257   64758 cri.go:89] found id: ""
	I0804 00:15:59.454285   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.454297   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:59.454305   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:59.454363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:59.496790   64758 cri.go:89] found id: ""
	I0804 00:15:59.496818   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.496829   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:59.496837   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:59.496896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:59.537395   64758 cri.go:89] found id: ""
	I0804 00:15:59.537424   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.537431   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:59.537439   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:59.537453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.600005   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:59.600042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:59.617304   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:59.617336   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:59.692828   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:59.692849   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:59.692864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:59.764000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:59.764038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:58.611600   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.110986   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.079326   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.083661   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.083689   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:01.578711   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.583011   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.583040   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:02.078606   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:02.083234   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:16:02.090079   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:16:02.090112   64502 api_server.go:131] duration metric: took 6.511921332s to wait for apiserver health ...
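	The health wait that completes above is a simple poll of the apiserver's /healthz endpoint until it answers HTTP 200 (the earlier 403 and 500 responses are retried). A minimal standalone probe with the same behaviour could look like the sketch below; this is illustrative only, not minikube's actual api_server.go code, and the endpoint URL is just the address seen in this log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Skip TLS verification: the apiserver certificate is signed by the
		// cluster's own CA, which this standalone probe does not have.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		const url = "https://192.168.50.140:8443/healthz" // address taken from the log above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}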
	I0804 00:16:02.090123   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:16:02.090132   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:16:02.092169   64502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:58.829268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.327623   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:02.093704   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:16:02.109001   64502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:16:02.131996   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:16:02.145300   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:16:02.145333   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:16:02.145340   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:16:02.145348   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:16:02.145370   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:16:02.145380   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:16:02.145389   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:16:02.145397   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:16:02.145403   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:16:02.145412   64502 system_pods.go:74] duration metric: took 13.393537ms to wait for pod list to return data ...
	I0804 00:16:02.145425   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:16:02.149623   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:16:02.149651   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:16:02.149661   64502 node_conditions.go:105] duration metric: took 4.231097ms to run NodePressure ...
	I0804 00:16:02.149677   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:16:02.424261   64502 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429537   64502 kubeadm.go:739] kubelet initialised
	I0804 00:16:02.429555   64502 kubeadm.go:740] duration metric: took 5.269005ms waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429563   64502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:02.435433   64502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.440580   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440606   64502 pod_ready.go:81] duration metric: took 5.145511ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.440619   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440628   64502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.445111   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445136   64502 pod_ready.go:81] duration metric: took 4.497361ms for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.445148   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445157   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.450172   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450200   64502 pod_ready.go:81] duration metric: took 5.032514ms for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.450211   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450219   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.536314   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536386   64502 pod_ready.go:81] duration metric: took 86.155481ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.536398   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536409   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.935794   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935830   64502 pod_ready.go:81] duration metric: took 399.405535ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.935842   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935861   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.335730   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335760   64502 pod_ready.go:81] duration metric: took 399.889478ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.335772   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335780   64502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.735762   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735786   64502 pod_ready.go:81] duration metric: took 399.996995ms for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.735795   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735802   64502 pod_ready.go:38] duration metric: took 1.306222891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
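	The pod_ready waits above repeatedly check each system-critical pod's PodReady condition and give up after the stated timeout. A minimal client-go sketch of that kind of check follows; it is illustrative only (not minikube's pod_ready.go), and the kubeconfig path, namespace, and pod name are simply the ones visible in this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-7gbcf", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}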
	I0804 00:16:03.735818   64502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:16:03.748578   64502 ops.go:34] apiserver oom_adj: -16
	I0804 00:16:03.748602   64502 kubeadm.go:597] duration metric: took 11.026274037s to restartPrimaryControlPlane
	I0804 00:16:03.748611   64502 kubeadm.go:394] duration metric: took 11.082760058s to StartCluster
	I0804 00:16:03.748637   64502 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.748719   64502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:16:03.750554   64502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.750824   64502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:16:03.750900   64502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:16:03.750998   64502 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-877598"
	I0804 00:16:03.751041   64502 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-877598"
	W0804 00:16:03.751053   64502 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:16:03.751051   64502 addons.go:69] Setting default-storageclass=true in profile "embed-certs-877598"
	I0804 00:16:03.751072   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:16:03.751108   64502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-877598"
	I0804 00:16:03.751063   64502 addons.go:69] Setting metrics-server=true in profile "embed-certs-877598"
	I0804 00:16:03.751181   64502 addons.go:234] Setting addon metrics-server=true in "embed-certs-877598"
	W0804 00:16:03.751196   64502 addons.go:243] addon metrics-server should already be in state true
	I0804 00:16:03.751245   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751467   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751503   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751540   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751612   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751088   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751990   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.752017   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.752817   64502 out.go:177] * Verifying Kubernetes components...
	I0804 00:16:03.754613   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:16:03.769684   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0804 00:16:03.769701   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0804 00:16:03.769697   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0804 00:16:03.770197   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770332   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770619   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770792   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770808   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.770935   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770949   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771125   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771327   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771520   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.771545   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771555   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.771938   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.772138   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772195   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.772521   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772565   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.776267   64502 addons.go:234] Setting addon default-storageclass=true in "embed-certs-877598"
	W0804 00:16:03.776292   64502 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:16:03.776327   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.776695   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.776738   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.789183   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0804 00:16:03.789660   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.789796   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0804 00:16:03.790184   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790202   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790246   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.790608   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.790869   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790900   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790985   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.791276   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.791519   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.793005   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.793338   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.795747   64502 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:16:03.795748   64502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:16:03.796208   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0804 00:16:03.796652   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.797194   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.797220   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.797589   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:16:03.797611   64502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:16:03.797632   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.797640   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.797673   64502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:03.797684   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:16:03.797697   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.798266   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.798311   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.801933   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802083   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802417   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802445   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.802766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.802851   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802868   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802936   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803140   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.803166   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.803310   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.803409   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803512   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.818073   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0804 00:16:03.818647   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.819107   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.819130   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.819488   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.819721   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.821982   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.822216   64502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:03.822232   64502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:16:03.822251   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.825593   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826055   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.826090   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826356   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.826526   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.826667   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.826829   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.955019   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:16:03.976453   64502 node_ready.go:35] waiting up to 6m0s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:04.051717   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:04.074720   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:16:04.074740   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:16:04.099578   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:16:04.099606   64502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:16:04.118348   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:04.163390   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:04.163418   64502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:16:04.227379   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:05.143364   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091613097s)
	I0804 00:16:05.143418   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143419   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.025041953s)
	I0804 00:16:05.143430   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143439   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143449   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143726   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143743   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143755   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143764   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.143893   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143915   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143935   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143964   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.144014   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144033   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.144085   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144259   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144305   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144319   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.150739   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.150761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.151073   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.151102   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.151130   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.169806   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.169832   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170103   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.170122   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170148   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170159   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.170171   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170461   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170546   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170563   64502 addons.go:475] Verifying addon metrics-server=true in "embed-certs-877598"
	I0804 00:16:05.172575   64502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0804 00:16:05.173964   64502 addons.go:510] duration metric: took 1.423065893s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
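	(The addon-enable sequence above copies each manifest onto the node and then invokes the bundled kubectl against it. A minimal Go sketch of that command shape follows; in minikube this runs over SSH via ssh_runner, while here it is simplified to a local exec purely for illustration, with the paths taken from the log.)

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// applyAddon mirrors the logged command: the bundled kubectl is run with the
	// cluster's kubeconfig against a manifest already placed under /etc/kubernetes/addons.
	func applyAddon(manifest string) error {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.30.3/kubectl",
			"apply", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply %s: %v\n%s", manifest, err, out)
		}
		return nil
	}

	func main() {
		// Manifest paths are the ones the log shows being scp'd into place.
		for _, m := range []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		} {
			if err := applyAddon(m); err != nil {
				log.Fatal(err)
			}
		}
	}
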
	I0804 00:16:02.307325   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:02.324168   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:02.324233   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:02.370204   64758 cri.go:89] found id: ""
	I0804 00:16:02.370234   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.370250   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:02.370258   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:02.370325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:02.405586   64758 cri.go:89] found id: ""
	I0804 00:16:02.405616   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.405628   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:02.405636   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:02.405694   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:02.445644   64758 cri.go:89] found id: ""
	I0804 00:16:02.445665   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.445675   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:02.445682   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:02.445739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:02.483659   64758 cri.go:89] found id: ""
	I0804 00:16:02.483686   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.483695   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:02.483701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:02.483751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:02.519903   64758 cri.go:89] found id: ""
	I0804 00:16:02.519929   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.519938   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:02.519944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:02.519991   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:02.557373   64758 cri.go:89] found id: ""
	I0804 00:16:02.557401   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.557410   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:02.557416   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:02.557472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:02.594203   64758 cri.go:89] found id: ""
	I0804 00:16:02.594238   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.594249   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:02.594256   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:02.594316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:02.635487   64758 cri.go:89] found id: ""
	I0804 00:16:02.635512   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.635520   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:02.635529   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:02.635543   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:02.686990   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:02.687035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:02.701784   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:02.701810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:02.778626   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:02.778648   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:02.778662   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:02.856056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:02.856097   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
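	(When no control-plane containers are found, the cycle above falls back to gathering kubelet, dmesg, CRI-O, and container-status output. The sketch below only wraps the same read-only commands, copied verbatim from the log, in a local loop for illustration; it is not minikube's logs.go.)

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// diagnostics holds the fallback log-gathering commands seen in the cycle above.
	var diagnostics = []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}

	func main() {
		for _, d := range diagnostics {
			out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
			if err != nil {
				log.Printf("gathering %s logs failed: %v", d.name, err)
			}
			fmt.Printf("=== %s ===\n%s\n", d.name, out)
		}
	}
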
	I0804 00:16:05.402858   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:05.418825   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:05.418900   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:05.458789   64758 cri.go:89] found id: ""
	I0804 00:16:05.458872   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.458887   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:05.458895   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:05.458967   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:05.498258   64758 cri.go:89] found id: ""
	I0804 00:16:05.498284   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.498295   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:05.498302   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:05.498364   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:05.540892   64758 cri.go:89] found id: ""
	I0804 00:16:05.540919   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.540927   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:05.540933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:05.540992   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:05.578876   64758 cri.go:89] found id: ""
	I0804 00:16:05.578911   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.578919   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:05.578924   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:05.578971   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:05.616248   64758 cri.go:89] found id: ""
	I0804 00:16:05.616272   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.616280   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:05.616285   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:05.616339   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:05.654387   64758 cri.go:89] found id: ""
	I0804 00:16:05.654419   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.654428   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:05.654436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:05.654528   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:05.695579   64758 cri.go:89] found id: ""
	I0804 00:16:05.695613   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.695625   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:05.695669   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:05.695752   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:05.740754   64758 cri.go:89] found id: ""
	I0804 00:16:05.740777   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.740785   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:05.740793   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:05.740805   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:05.792091   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:05.792126   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:05.809130   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:05.809164   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:05.888441   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:05.888465   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:05.888479   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:05.969336   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:05.969390   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:03.111834   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.613749   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:03.830570   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:06.328076   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.980692   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:08.480205   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:09.480127   64502 node_ready.go:49] node "embed-certs-877598" has status "Ready":"True"
	I0804 00:16:09.480147   64502 node_ready.go:38] duration metric: took 5.503660587s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:09.480155   64502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:09.485704   64502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491316   64502 pod_ready.go:92] pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:09.491340   64502 pod_ready.go:81] duration metric: took 5.611918ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491348   64502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
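	(The node_ready/pod_ready waits above poll the API server until the Ready condition turns True. A minimal client-go sketch of that style of poll is shown below; the kubeconfig path, namespace, pod name, and timeout are lifted from the log for illustration only, and this is not minikube's pod_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the API server until the named pod reports the
	// PodReady condition as True, or the context expires.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, interval time.Duration) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
			case <-time.After(interval):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPodReady(ctx, cs, "kube-system", "etcd-embed-certs-877598", 2*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pod is Ready")
	}
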
	I0804 00:16:08.514981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:08.531117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:08.531188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:08.569167   64758 cri.go:89] found id: ""
	I0804 00:16:08.569199   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.569210   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:08.569218   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:08.569282   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:08.608478   64758 cri.go:89] found id: ""
	I0804 00:16:08.608559   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.608572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:08.608580   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:08.608636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:08.645939   64758 cri.go:89] found id: ""
	I0804 00:16:08.645972   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.645983   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:08.645990   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:08.646050   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:08.685274   64758 cri.go:89] found id: ""
	I0804 00:16:08.685305   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.685316   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:08.685324   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:08.685400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:08.722314   64758 cri.go:89] found id: ""
	I0804 00:16:08.722345   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.722357   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:08.722363   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:08.722427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:08.758577   64758 cri.go:89] found id: ""
	I0804 00:16:08.758606   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.758617   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:08.758624   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:08.758685   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.798734   64758 cri.go:89] found id: ""
	I0804 00:16:08.798761   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.798773   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:08.798781   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:08.798842   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:08.837577   64758 cri.go:89] found id: ""
	I0804 00:16:08.837600   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.837608   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:08.837616   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:08.837627   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:08.894426   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:08.894465   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:08.909851   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:08.909879   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:08.989858   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:08.989878   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:08.989893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:09.081056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:09.081098   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:11.627914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:11.641805   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:11.641896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:11.679002   64758 cri.go:89] found id: ""
	I0804 00:16:11.679028   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.679036   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:11.679042   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:11.679090   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:11.720188   64758 cri.go:89] found id: ""
	I0804 00:16:11.720220   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.720236   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:11.720245   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:11.720307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:11.760085   64758 cri.go:89] found id: ""
	I0804 00:16:11.760118   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.760130   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:11.760138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:11.760198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:11.796220   64758 cri.go:89] found id: ""
	I0804 00:16:11.796249   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.796266   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:11.796274   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:11.796335   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:11.834216   64758 cri.go:89] found id: ""
	I0804 00:16:11.834243   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.834253   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:11.834260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:11.834336   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:11.869205   64758 cri.go:89] found id: ""
	I0804 00:16:11.869230   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.869237   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:11.869243   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:11.869301   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.110499   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.618011   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:08.827284   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.828942   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:11.498264   64502 pod_ready.go:102] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:12.498916   64502 pod_ready.go:92] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:12.498949   64502 pod_ready.go:81] duration metric: took 3.007593153s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:12.498961   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562862   64502 pod_ready.go:92] pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.562896   64502 pod_ready.go:81] duration metric: took 2.063926324s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562910   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573628   64502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.573655   64502 pod_ready.go:81] duration metric: took 10.735916ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573670   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583241   64502 pod_ready.go:92] pod "kube-proxy-wk8zf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.583266   64502 pod_ready.go:81] duration metric: took 9.588875ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583278   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593419   64502 pod_ready.go:92] pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.593445   64502 pod_ready.go:81] duration metric: took 10.158665ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593457   64502 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:11.912091   64758 cri.go:89] found id: ""
	I0804 00:16:11.912120   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.912132   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:11.912145   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:11.912203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:11.949570   64758 cri.go:89] found id: ""
	I0804 00:16:11.949603   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.949614   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:11.949625   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:11.949643   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:12.006542   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:12.006575   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:12.022435   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:12.022474   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:12.101007   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:12.101032   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:12.101057   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:12.183836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:12.183876   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:14.725345   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:14.738389   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:14.738464   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:14.780103   64758 cri.go:89] found id: ""
	I0804 00:16:14.780133   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.780142   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:14.780147   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:14.780197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:14.817811   64758 cri.go:89] found id: ""
	I0804 00:16:14.817847   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.817863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:14.817872   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:14.817946   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:14.854450   64758 cri.go:89] found id: ""
	I0804 00:16:14.854478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.854488   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:14.854495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:14.854561   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:14.891862   64758 cri.go:89] found id: ""
	I0804 00:16:14.891891   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.891900   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:14.891905   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:14.891958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:14.928450   64758 cri.go:89] found id: ""
	I0804 00:16:14.928478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.928488   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:14.928495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:14.928554   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:14.965820   64758 cri.go:89] found id: ""
	I0804 00:16:14.965848   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.965860   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:14.965867   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:14.965945   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:15.008725   64758 cri.go:89] found id: ""
	I0804 00:16:15.008874   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.008888   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:15.008897   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:15.008957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:15.044618   64758 cri.go:89] found id: ""
	I0804 00:16:15.044768   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.044792   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:15.044802   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:15.044815   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:15.102786   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:15.102825   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:15.118305   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:15.118347   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:15.196397   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:15.196420   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:15.196435   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:15.277941   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:15.277986   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:13.110969   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.112546   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:13.327840   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.826447   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:16.600315   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.099064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.819354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:17.834271   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:17.834332   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:17.870930   64758 cri.go:89] found id: ""
	I0804 00:16:17.870961   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.870973   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:17.870980   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:17.871040   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:17.907980   64758 cri.go:89] found id: ""
	I0804 00:16:17.908007   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.908016   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:17.908021   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:17.908067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:17.943257   64758 cri.go:89] found id: ""
	I0804 00:16:17.943284   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.943295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:17.943301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:17.943363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:17.982297   64758 cri.go:89] found id: ""
	I0804 00:16:17.982328   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.982338   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:17.982345   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:17.982405   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:18.022780   64758 cri.go:89] found id: ""
	I0804 00:16:18.022810   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.022841   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:18.022850   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:18.022913   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:18.061891   64758 cri.go:89] found id: ""
	I0804 00:16:18.061926   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.061937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:18.061945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:18.062012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:18.100807   64758 cri.go:89] found id: ""
	I0804 00:16:18.100845   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.100855   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:18.100862   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:18.100917   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:18.142011   64758 cri.go:89] found id: ""
	I0804 00:16:18.142044   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.142056   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:18.142066   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:18.142090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:18.195476   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:18.195511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:18.209661   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:18.209690   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:18.282638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:18.282657   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:18.282669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:18.363900   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:18.363938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:20.908753   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:20.922878   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:20.922962   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:20.961013   64758 cri.go:89] found id: ""
	I0804 00:16:20.961041   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.961052   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:20.961058   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:20.961109   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:20.998027   64758 cri.go:89] found id: ""
	I0804 00:16:20.998059   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.998068   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:20.998074   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:20.998121   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:21.035640   64758 cri.go:89] found id: ""
	I0804 00:16:21.035669   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.035680   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:21.035688   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:21.035751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:21.075737   64758 cri.go:89] found id: ""
	I0804 00:16:21.075770   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.075779   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:21.075786   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:21.075846   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:21.120024   64758 cri.go:89] found id: ""
	I0804 00:16:21.120046   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.120054   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:21.120061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:21.120126   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:21.160796   64758 cri.go:89] found id: ""
	I0804 00:16:21.160821   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.160840   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:21.160847   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:21.160907   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:21.195519   64758 cri.go:89] found id: ""
	I0804 00:16:21.195547   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.195558   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:21.195566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:21.195629   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:21.236193   64758 cri.go:89] found id: ""
	I0804 00:16:21.236222   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.236232   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:21.236243   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:21.236258   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:21.295154   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:21.295198   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:21.309540   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:21.309566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:21.389391   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:21.389416   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:21.389433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:21.472771   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:21.472808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:17.611366   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.612092   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.827036   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.827655   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.828026   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.101899   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:23.601687   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.018923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:24.032954   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:24.033013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:24.073677   64758 cri.go:89] found id: ""
	I0804 00:16:24.073703   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.073711   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:24.073716   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:24.073777   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:24.115752   64758 cri.go:89] found id: ""
	I0804 00:16:24.115775   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.115785   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:24.115792   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:24.115849   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:24.152967   64758 cri.go:89] found id: ""
	I0804 00:16:24.153001   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.153017   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:24.153024   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:24.153098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:24.190557   64758 cri.go:89] found id: ""
	I0804 00:16:24.190581   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.190589   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:24.190595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:24.190643   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:24.229312   64758 cri.go:89] found id: ""
	I0804 00:16:24.229341   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.229351   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:24.229373   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:24.229437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:24.265076   64758 cri.go:89] found id: ""
	I0804 00:16:24.265100   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.265107   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:24.265113   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:24.265167   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:24.306508   64758 cri.go:89] found id: ""
	I0804 00:16:24.306534   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.306542   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:24.306547   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:24.306598   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:24.350714   64758 cri.go:89] found id: ""
	I0804 00:16:24.350747   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.350759   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:24.350770   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:24.350785   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:24.366188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:24.366216   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:24.438410   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:24.438431   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:24.438447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:24.522635   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:24.522669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.562647   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:24.562678   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:22.110420   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.111399   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.613839   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.327982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.826914   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.099435   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:28.099896   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:30.100659   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:27.119437   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:27.133330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:27.133426   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:27.170001   64758 cri.go:89] found id: ""
	I0804 00:16:27.170039   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.170048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:27.170054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:27.170112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:27.205811   64758 cri.go:89] found id: ""
	I0804 00:16:27.205843   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.205854   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:27.205861   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:27.205922   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:27.247249   64758 cri.go:89] found id: ""
	I0804 00:16:27.247278   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.247287   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:27.247294   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:27.247360   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:27.285659   64758 cri.go:89] found id: ""
	I0804 00:16:27.285688   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.285697   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:27.285703   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:27.285774   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:27.321039   64758 cri.go:89] found id: ""
	I0804 00:16:27.321066   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.321075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:27.321084   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:27.321130   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:27.359947   64758 cri.go:89] found id: ""
	I0804 00:16:27.359977   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.359988   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:27.359996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:27.360056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:27.401408   64758 cri.go:89] found id: ""
	I0804 00:16:27.401432   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.401440   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:27.401449   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:27.401495   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:27.437297   64758 cri.go:89] found id: ""
	I0804 00:16:27.437326   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.437337   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:27.437347   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:27.437373   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.490594   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:27.490639   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:27.505993   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:27.506021   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:27.588779   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:27.588804   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:27.588820   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:27.681557   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:27.681592   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.225062   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:30.239475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:30.239540   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:30.283896   64758 cri.go:89] found id: ""
	I0804 00:16:30.283923   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.283931   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:30.283938   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:30.284013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:30.321506   64758 cri.go:89] found id: ""
	I0804 00:16:30.321532   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.321539   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:30.321545   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:30.321593   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:30.358314   64758 cri.go:89] found id: ""
	I0804 00:16:30.358340   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.358347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:30.358353   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:30.358400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:30.393561   64758 cri.go:89] found id: ""
	I0804 00:16:30.393587   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.393595   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:30.393600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:30.393646   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:30.429907   64758 cri.go:89] found id: ""
	I0804 00:16:30.429935   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.429943   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:30.429949   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:30.430008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:30.466305   64758 cri.go:89] found id: ""
	I0804 00:16:30.466332   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.466342   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:30.466350   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:30.466408   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:30.505384   64758 cri.go:89] found id: ""
	I0804 00:16:30.505413   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.505424   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:30.505431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:30.505492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:30.541756   64758 cri.go:89] found id: ""
	I0804 00:16:30.541786   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.541796   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:30.541806   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:30.541821   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:30.555516   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:30.555554   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:30.627442   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:30.627463   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:30.627473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:30.701452   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:30.701489   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.743436   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:30.743473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:29.111149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.111470   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:29.327268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.328424   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:32.605884   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:34.608119   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.298898   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:33.315211   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:33.315292   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:33.353171   64758 cri.go:89] found id: ""
	I0804 00:16:33.353207   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.353220   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:33.353229   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:33.353297   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:33.389767   64758 cri.go:89] found id: ""
	I0804 00:16:33.389792   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.389799   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:33.389805   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:33.389851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:33.446889   64758 cri.go:89] found id: ""
	I0804 00:16:33.446928   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.446939   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:33.446946   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:33.447004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:33.487340   64758 cri.go:89] found id: ""
	I0804 00:16:33.487362   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.487370   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:33.487376   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:33.487423   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:33.530398   64758 cri.go:89] found id: ""
	I0804 00:16:33.530421   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.530429   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:33.530435   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:33.530483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:33.568725   64758 cri.go:89] found id: ""
	I0804 00:16:33.568753   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.568762   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:33.568769   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:33.568818   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:33.607205   64758 cri.go:89] found id: ""
	I0804 00:16:33.607232   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.607242   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:33.607249   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:33.607311   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:33.648188   64758 cri.go:89] found id: ""
	I0804 00:16:33.648220   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.648230   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:33.648240   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:33.648256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.700231   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:33.700266   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:33.714899   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:33.714932   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:33.794306   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:33.794326   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:33.794340   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.872446   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:33.872482   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.415000   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:36.428920   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:36.428996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:36.464784   64758 cri.go:89] found id: ""
	I0804 00:16:36.464810   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.464817   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:36.464823   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:36.464925   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:36.501394   64758 cri.go:89] found id: ""
	I0804 00:16:36.501423   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.501431   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:36.501437   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:36.501497   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:36.537049   64758 cri.go:89] found id: ""
	I0804 00:16:36.537079   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.537090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:36.537102   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:36.537173   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:36.573956   64758 cri.go:89] found id: ""
	I0804 00:16:36.573986   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.573997   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:36.574004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:36.574065   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:36.612996   64758 cri.go:89] found id: ""
	I0804 00:16:36.613016   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.613023   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:36.613029   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:36.613083   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:36.652346   64758 cri.go:89] found id: ""
	I0804 00:16:36.652367   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.652374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:36.652380   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:36.652437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:36.690073   64758 cri.go:89] found id: ""
	I0804 00:16:36.690100   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.690110   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:36.690119   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:36.690182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:36.732436   64758 cri.go:89] found id: ""
	I0804 00:16:36.732466   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.732477   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:36.732487   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:36.732505   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:36.746036   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:36.746060   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:36.818141   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:36.818164   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:36.818179   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.611181   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.611691   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.329719   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.330172   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.100705   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.603600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:36.907689   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:36.907732   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.947104   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:36.947135   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.502960   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:39.516340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:39.516414   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:39.555903   64758 cri.go:89] found id: ""
	I0804 00:16:39.555929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.555939   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:39.555946   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:39.556004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:39.599791   64758 cri.go:89] found id: ""
	I0804 00:16:39.599816   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.599827   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:39.599834   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:39.599894   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:39.642903   64758 cri.go:89] found id: ""
	I0804 00:16:39.642929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.642936   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:39.642944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:39.643004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:39.678667   64758 cri.go:89] found id: ""
	I0804 00:16:39.678693   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.678702   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:39.678709   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:39.678757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:39.716888   64758 cri.go:89] found id: ""
	I0804 00:16:39.716916   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.716926   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:39.716933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:39.717001   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:39.751576   64758 cri.go:89] found id: ""
	I0804 00:16:39.751602   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.751610   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:39.751616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:39.751664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:39.794026   64758 cri.go:89] found id: ""
	I0804 00:16:39.794056   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.794067   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:39.794087   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:39.794158   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:39.841426   64758 cri.go:89] found id: ""
	I0804 00:16:39.841454   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.841464   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:39.841474   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:39.841492   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.902579   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:39.902616   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:39.924467   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:39.924495   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:40.001318   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:40.001345   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:40.001377   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:40.081520   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:40.081552   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:38.111443   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:40.610810   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.827851   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.828752   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.327716   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.100037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.100850   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.623094   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:42.636523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:42.636594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:42.674188   64758 cri.go:89] found id: ""
	I0804 00:16:42.674218   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.674226   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:42.674231   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:42.674277   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:42.708496   64758 cri.go:89] found id: ""
	I0804 00:16:42.708522   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.708532   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:42.708539   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:42.708601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:42.751050   64758 cri.go:89] found id: ""
	I0804 00:16:42.751087   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.751100   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:42.751107   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:42.751170   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:42.788520   64758 cri.go:89] found id: ""
	I0804 00:16:42.788546   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.788555   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:42.788560   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:42.788619   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:42.828273   64758 cri.go:89] found id: ""
	I0804 00:16:42.828297   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.828304   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:42.828309   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:42.828356   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:42.867754   64758 cri.go:89] found id: ""
	I0804 00:16:42.867784   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.867799   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:42.867807   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:42.867864   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:42.903945   64758 cri.go:89] found id: ""
	I0804 00:16:42.903977   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.903988   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:42.903996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:42.904059   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:42.942477   64758 cri.go:89] found id: ""
	I0804 00:16:42.942518   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.942539   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:42.942549   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:42.942565   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.981776   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:42.981810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:43.037601   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:43.037634   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:43.052719   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:43.052746   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:43.122664   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:43.122688   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:43.122702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:45.701275   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:45.714532   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:45.714607   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:45.750932   64758 cri.go:89] found id: ""
	I0804 00:16:45.750955   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.750986   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:45.750991   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:45.751042   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:45.787348   64758 cri.go:89] found id: ""
	I0804 00:16:45.787373   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.787381   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:45.787387   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:45.787441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:45.823390   64758 cri.go:89] found id: ""
	I0804 00:16:45.823419   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.823429   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:45.823436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:45.823498   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:45.861400   64758 cri.go:89] found id: ""
	I0804 00:16:45.861430   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.861440   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:45.861448   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:45.861508   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:45.898992   64758 cri.go:89] found id: ""
	I0804 00:16:45.899024   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.899036   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:45.899043   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:45.899110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:45.934542   64758 cri.go:89] found id: ""
	I0804 00:16:45.934570   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.934582   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:45.934589   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:45.934648   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:45.967908   64758 cri.go:89] found id: ""
	I0804 00:16:45.967938   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.967949   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:45.967957   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:45.968018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:46.006475   64758 cri.go:89] found id: ""
	I0804 00:16:46.006504   64758 logs.go:276] 0 containers: []
	W0804 00:16:46.006516   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:46.006526   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:46.006541   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:46.058760   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:46.058793   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:46.074753   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:46.074777   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:46.149634   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:46.149655   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:46.149671   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:46.230104   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:46.230140   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:43.111492   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:45.611224   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.827683   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:47.326999   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:46.600307   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.100532   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:48.772224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:48.785848   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:48.785935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.825206   64758 cri.go:89] found id: ""
	I0804 00:16:48.825232   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.825242   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:48.825249   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:48.825315   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:48.861559   64758 cri.go:89] found id: ""
	I0804 00:16:48.861588   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.861599   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:48.861607   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:48.861675   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:48.903375   64758 cri.go:89] found id: ""
	I0804 00:16:48.903401   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.903412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:48.903419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:48.903480   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:48.940708   64758 cri.go:89] found id: ""
	I0804 00:16:48.940736   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.940748   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:48.940755   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:48.940817   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:48.976190   64758 cri.go:89] found id: ""
	I0804 00:16:48.976218   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.976228   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:48.976236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:48.976291   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:49.010393   64758 cri.go:89] found id: ""
	I0804 00:16:49.010423   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.010434   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:49.010442   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:49.010506   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:49.046670   64758 cri.go:89] found id: ""
	I0804 00:16:49.046698   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.046707   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:49.046711   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:49.046759   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:49.085254   64758 cri.go:89] found id: ""
	I0804 00:16:49.085284   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.085293   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:49.085302   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:49.085314   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:49.142402   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:49.142433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:49.157063   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:49.157092   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:49.233808   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:49.233829   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:49.233841   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:49.320355   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:49.320395   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:51.862548   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:51.875679   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:51.875750   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.110954   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:50.111867   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.327109   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.327920   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.600258   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.601052   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.911400   64758 cri.go:89] found id: ""
	I0804 00:16:51.911427   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.911437   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:51.911444   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:51.911505   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:51.948825   64758 cri.go:89] found id: ""
	I0804 00:16:51.948853   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.948863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:51.948870   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:51.948935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:51.989458   64758 cri.go:89] found id: ""
	I0804 00:16:51.989488   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.989499   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:51.989506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:51.989568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:52.026663   64758 cri.go:89] found id: ""
	I0804 00:16:52.026685   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.026693   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:52.026698   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:52.026754   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:52.066089   64758 cri.go:89] found id: ""
	I0804 00:16:52.066115   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.066127   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:52.066135   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:52.066198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:52.102159   64758 cri.go:89] found id: ""
	I0804 00:16:52.102185   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.102196   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:52.102203   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:52.102258   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:52.144239   64758 cri.go:89] found id: ""
	I0804 00:16:52.144266   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.144276   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:52.144283   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:52.144344   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:52.180679   64758 cri.go:89] found id: ""
	I0804 00:16:52.180708   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.180717   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:52.180725   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:52.180738   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:52.262074   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:52.262116   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.305913   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:52.305948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:52.357044   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:52.357081   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:52.372090   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:52.372119   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:52.444148   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:54.944910   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:54.958182   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:54.958239   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:54.993629   64758 cri.go:89] found id: ""
	I0804 00:16:54.993657   64758 logs.go:276] 0 containers: []
	W0804 00:16:54.993668   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:54.993675   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:54.993734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:55.029270   64758 cri.go:89] found id: ""
	I0804 00:16:55.029299   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.029310   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:55.029317   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:55.029393   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:55.067923   64758 cri.go:89] found id: ""
	I0804 00:16:55.067951   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.067961   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:55.067968   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:55.068027   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:55.107533   64758 cri.go:89] found id: ""
	I0804 00:16:55.107556   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.107565   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:55.107572   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:55.107633   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:55.143828   64758 cri.go:89] found id: ""
	I0804 00:16:55.143856   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.143868   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:55.143875   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:55.143940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:55.177960   64758 cri.go:89] found id: ""
	I0804 00:16:55.178015   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.178030   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:55.178038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:55.178112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:55.217457   64758 cri.go:89] found id: ""
	I0804 00:16:55.217481   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.217488   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:55.217494   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:55.217538   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:55.259862   64758 cri.go:89] found id: ""
	I0804 00:16:55.259890   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.259898   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:55.259907   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:55.259918   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:55.311566   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:55.311598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:55.327833   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:55.327866   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:55.406475   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:55.406495   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:55.406511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:55.484586   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:55.484618   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.610982   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:54.611276   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.611515   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.827394   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:55.827945   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.099238   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.100223   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.599870   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.028251   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:58.042169   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:58.042236   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:58.076836   64758 cri.go:89] found id: ""
	I0804 00:16:58.076859   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.076868   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:58.076873   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:58.076937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:58.115989   64758 cri.go:89] found id: ""
	I0804 00:16:58.116019   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.116031   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:58.116037   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:58.116099   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:58.155049   64758 cri.go:89] found id: ""
	I0804 00:16:58.155079   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.155090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:58.155097   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:58.155160   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:58.190257   64758 cri.go:89] found id: ""
	I0804 00:16:58.190293   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.190305   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:58.190315   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:58.190370   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:58.225001   64758 cri.go:89] found id: ""
	I0804 00:16:58.225029   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.225038   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:58.225061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:58.225118   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:58.268881   64758 cri.go:89] found id: ""
	I0804 00:16:58.268925   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.268937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:58.268945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:58.269010   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:58.305223   64758 cri.go:89] found id: ""
	I0804 00:16:58.305253   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.305269   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:58.305277   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:58.305340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:58.340517   64758 cri.go:89] found id: ""
	I0804 00:16:58.340548   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.340559   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:58.340570   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:58.340584   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:58.355372   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:58.355403   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:58.426292   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:58.426312   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:58.426326   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:58.509990   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:58.510034   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.550957   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:58.550988   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.104806   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:01.119379   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:01.119453   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:01.158376   64758 cri.go:89] found id: ""
	I0804 00:17:01.158407   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.158419   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:01.158426   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:01.158484   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:01.193826   64758 cri.go:89] found id: ""
	I0804 00:17:01.193858   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.193869   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:01.193876   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:01.193937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:01.228566   64758 cri.go:89] found id: ""
	I0804 00:17:01.228588   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.228600   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:01.228607   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:01.228667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:01.265736   64758 cri.go:89] found id: ""
	I0804 00:17:01.265762   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.265772   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:01.265778   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:01.265834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:01.302655   64758 cri.go:89] found id: ""
	I0804 00:17:01.302679   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.302694   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:01.302699   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:01.302753   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:01.340191   64758 cri.go:89] found id: ""
	I0804 00:17:01.340218   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.340226   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:01.340236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:01.340294   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:01.375767   64758 cri.go:89] found id: ""
	I0804 00:17:01.375789   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.375797   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:01.375802   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:01.375875   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:01.412446   64758 cri.go:89] found id: ""
	I0804 00:17:01.412479   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.412490   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:01.412502   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:01.412518   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.466271   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:01.466309   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:01.480800   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:01.480838   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:01.547909   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:01.547932   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:01.547948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:01.628318   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:01.628351   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.611854   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:01.111626   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.326831   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.327154   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.328038   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.601960   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.099489   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.175883   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:04.189038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:04.189098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:04.229126   64758 cri.go:89] found id: ""
	I0804 00:17:04.229158   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.229167   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:04.229174   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:04.229235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:04.264107   64758 cri.go:89] found id: ""
	I0804 00:17:04.264134   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.264142   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:04.264147   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:04.264203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:04.299959   64758 cri.go:89] found id: ""
	I0804 00:17:04.299996   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.300004   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:04.300010   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:04.300056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:04.337978   64758 cri.go:89] found id: ""
	I0804 00:17:04.338006   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.338016   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:04.338023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:04.338081   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:04.377969   64758 cri.go:89] found id: ""
	I0804 00:17:04.377993   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.378001   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:04.378006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:04.378068   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:04.413036   64758 cri.go:89] found id: ""
	I0804 00:17:04.413062   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.413071   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:04.413078   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:04.413140   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:04.450387   64758 cri.go:89] found id: ""
	I0804 00:17:04.450417   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.450426   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:04.450431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:04.450488   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:04.490132   64758 cri.go:89] found id: ""
	I0804 00:17:04.490165   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.490177   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:04.490188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:04.490204   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:04.560633   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:04.560653   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:04.560668   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:04.639409   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:04.639445   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.682479   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:04.682512   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:04.734823   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:04.734857   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:03.112357   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.828050   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.327249   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.099893   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.100093   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.250174   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:07.263523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:07.263599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:07.300095   64758 cri.go:89] found id: ""
	I0804 00:17:07.300124   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.300136   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:07.300144   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:07.300211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:07.337798   64758 cri.go:89] found id: ""
	I0804 00:17:07.337824   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.337846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:07.337851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:07.337902   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:07.375305   64758 cri.go:89] found id: ""
	I0804 00:17:07.375337   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.375348   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:07.375356   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:07.375406   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:07.411603   64758 cri.go:89] found id: ""
	I0804 00:17:07.411629   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.411639   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:07.411646   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:07.411704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:07.450478   64758 cri.go:89] found id: ""
	I0804 00:17:07.450502   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.450511   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:07.450518   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:07.450564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:07.489972   64758 cri.go:89] found id: ""
	I0804 00:17:07.489997   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.490006   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:07.490012   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:07.490073   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:07.523685   64758 cri.go:89] found id: ""
	I0804 00:17:07.523713   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.523725   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:07.523732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:07.523789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:07.562636   64758 cri.go:89] found id: ""
	I0804 00:17:07.562665   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.562675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:07.562686   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:07.562702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:07.647968   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:07.648004   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.689829   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:07.689856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:07.738333   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:07.738366   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.753419   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:07.753448   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:07.829678   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.329981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:10.343676   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:10.343743   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:10.379546   64758 cri.go:89] found id: ""
	I0804 00:17:10.379575   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.379586   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:10.379594   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:10.379657   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:10.416247   64758 cri.go:89] found id: ""
	I0804 00:17:10.416271   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.416279   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:10.416284   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:10.416340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:10.455261   64758 cri.go:89] found id: ""
	I0804 00:17:10.455291   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.455303   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:10.455310   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:10.455373   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:10.493220   64758 cri.go:89] found id: ""
	I0804 00:17:10.493251   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.493262   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:10.493270   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:10.493329   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:10.538682   64758 cri.go:89] found id: ""
	I0804 00:17:10.538709   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.538720   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:10.538727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:10.538787   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:10.575509   64758 cri.go:89] found id: ""
	I0804 00:17:10.575535   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.575546   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:10.575553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:10.575609   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:10.613163   64758 cri.go:89] found id: ""
	I0804 00:17:10.613188   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.613196   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:10.613201   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:10.613260   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:10.648914   64758 cri.go:89] found id: ""
	I0804 00:17:10.648940   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.648947   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:10.648956   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:10.648968   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:10.700151   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:10.700187   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:10.714971   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:10.714998   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:10.787679   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.787698   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:10.787710   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:10.865008   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:10.865048   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.611770   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:10.110299   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.327569   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.327855   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.603427   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.100524   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.406150   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:13.419602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:13.419659   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:13.456823   64758 cri.go:89] found id: ""
	I0804 00:17:13.456852   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.456863   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:13.456870   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:13.456935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:13.493527   64758 cri.go:89] found id: ""
	I0804 00:17:13.493556   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.493567   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:13.493574   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:13.493697   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:13.529745   64758 cri.go:89] found id: ""
	I0804 00:17:13.529770   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.529784   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:13.529790   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:13.529856   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:13.567775   64758 cri.go:89] found id: ""
	I0804 00:17:13.567811   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.567819   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:13.567824   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:13.567888   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:13.604638   64758 cri.go:89] found id: ""
	I0804 00:17:13.604670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.604678   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:13.604685   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:13.604741   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:13.646638   64758 cri.go:89] found id: ""
	I0804 00:17:13.646670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.646679   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:13.646684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:13.646730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:13.694656   64758 cri.go:89] found id: ""
	I0804 00:17:13.694682   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.694693   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:13.694701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:13.694761   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:13.733738   64758 cri.go:89] found id: ""
	I0804 00:17:13.733762   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.733771   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:13.733780   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:13.733792   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:13.749747   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:13.749775   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:13.832826   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:13.832852   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:13.832868   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:13.914198   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:13.914233   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.952753   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:13.952787   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.503600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:16.516932   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:16.517004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:16.552012   64758 cri.go:89] found id: ""
	I0804 00:17:16.552037   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.552046   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:16.552052   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:16.552110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:16.590626   64758 cri.go:89] found id: ""
	I0804 00:17:16.590653   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.590660   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:16.590666   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:16.590732   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:16.628684   64758 cri.go:89] found id: ""
	I0804 00:17:16.628712   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.628723   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:16.628729   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:16.628792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:16.664934   64758 cri.go:89] found id: ""
	I0804 00:17:16.664969   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.664980   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:16.664987   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:16.665054   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:16.700098   64758 cri.go:89] found id: ""
	I0804 00:17:16.700127   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.700138   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:16.700144   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:16.700214   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:16.736761   64758 cri.go:89] found id: ""
	I0804 00:17:16.736786   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.736795   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:16.736800   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:16.736863   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:16.780010   64758 cri.go:89] found id: ""
	I0804 00:17:16.780033   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.780045   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:16.780050   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:16.780106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:16.816079   64758 cri.go:89] found id: ""
	I0804 00:17:16.816103   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.816112   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:16.816122   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:16.816136   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.866526   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:16.866560   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:16.881254   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:16.881287   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:17:12.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.610978   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.611860   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.827860   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.327167   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.601482   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:19.100152   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:17:16.952491   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:16.952515   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:16.952530   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:17.038943   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:17.038977   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:19.580078   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:19.595538   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:19.595601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:19.632206   64758 cri.go:89] found id: ""
	I0804 00:17:19.632234   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.632245   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:19.632252   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:19.632307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:19.670335   64758 cri.go:89] found id: ""
	I0804 00:17:19.670362   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.670377   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:19.670388   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:19.670447   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:19.707772   64758 cri.go:89] found id: ""
	I0804 00:17:19.707801   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.707812   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:19.707818   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:19.707877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:19.743822   64758 cri.go:89] found id: ""
	I0804 00:17:19.743855   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.743867   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:19.743874   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:19.743930   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:19.781592   64758 cri.go:89] found id: ""
	I0804 00:17:19.781622   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.781632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:19.781640   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:19.781698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:19.818792   64758 cri.go:89] found id: ""
	I0804 00:17:19.818815   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.818823   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:19.818829   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:19.818877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:19.856486   64758 cri.go:89] found id: ""
	I0804 00:17:19.856511   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.856522   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:19.856528   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:19.856586   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:19.901721   64758 cri.go:89] found id: ""
	I0804 00:17:19.901743   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.901754   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:19.901764   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:19.901780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:19.980095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:19.980119   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:19.980134   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:20.072699   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:20.072750   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:20.159007   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:20.159038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:20.211785   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:20.211818   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:19.110218   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.110572   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:18.828527   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:20.828554   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.600968   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.602526   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.603220   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:22.727235   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:22.740922   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:22.740996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:22.780356   64758 cri.go:89] found id: ""
	I0804 00:17:22.780381   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.780392   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:22.780400   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:22.780459   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:22.817075   64758 cri.go:89] found id: ""
	I0804 00:17:22.817100   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.817111   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:22.817119   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:22.817182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:22.857213   64758 cri.go:89] found id: ""
	I0804 00:17:22.857243   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.857253   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:22.857260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:22.857325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:22.894049   64758 cri.go:89] found id: ""
	I0804 00:17:22.894085   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.894096   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:22.894104   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:22.894171   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:22.929718   64758 cri.go:89] found id: ""
	I0804 00:17:22.929746   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.929756   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:22.929770   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:22.929843   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:22.964863   64758 cri.go:89] found id: ""
	I0804 00:17:22.964892   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.964901   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:22.964907   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:22.964958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:23.002565   64758 cri.go:89] found id: ""
	I0804 00:17:23.002593   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.002603   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:23.002611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:23.002676   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:23.038161   64758 cri.go:89] found id: ""
	I0804 00:17:23.038188   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.038199   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:23.038211   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:23.038224   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:23.091865   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:23.091903   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:23.108358   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:23.108388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:23.186417   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:23.186438   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:23.186453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.269119   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:23.269161   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:25.812405   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:25.833174   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:25.833253   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:25.881654   64758 cri.go:89] found id: ""
	I0804 00:17:25.881681   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.881690   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:25.881696   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:25.881757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:25.936968   64758 cri.go:89] found id: ""
	I0804 00:17:25.936997   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.937006   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:25.937011   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:25.937066   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:25.972437   64758 cri.go:89] found id: ""
	I0804 00:17:25.972462   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.972470   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:25.972475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:25.972529   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:26.008306   64758 cri.go:89] found id: ""
	I0804 00:17:26.008346   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.008357   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:26.008366   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:26.008435   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:26.045593   64758 cri.go:89] found id: ""
	I0804 00:17:26.045620   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.045632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:26.045639   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:26.045696   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:26.084170   64758 cri.go:89] found id: ""
	I0804 00:17:26.084195   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.084205   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:26.084212   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:26.084272   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:26.122524   64758 cri.go:89] found id: ""
	I0804 00:17:26.122551   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.122559   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:26.122565   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:26.122623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:26.159264   64758 cri.go:89] found id: ""
	I0804 00:17:26.159297   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.159308   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:26.159320   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:26.159337   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:26.205692   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:26.205718   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:26.257286   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:26.257321   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:26.271582   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:26.271611   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:26.344562   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:26.344586   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:26.344598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.112800   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.610507   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.327294   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.828519   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.100160   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.100351   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.929410   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:28.943941   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:28.944003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:28.986127   64758 cri.go:89] found id: ""
	I0804 00:17:28.986157   64758 logs.go:276] 0 containers: []
	W0804 00:17:28.986169   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:28.986176   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:28.986237   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:29.025528   64758 cri.go:89] found id: ""
	I0804 00:17:29.025556   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.025564   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:29.025570   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:29.025624   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:29.059525   64758 cri.go:89] found id: ""
	I0804 00:17:29.059553   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.059561   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:29.059566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:29.059614   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:29.097451   64758 cri.go:89] found id: ""
	I0804 00:17:29.097489   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.097499   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:29.097506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:29.097564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:29.135504   64758 cri.go:89] found id: ""
	I0804 00:17:29.135532   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.135540   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:29.135546   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:29.135601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:29.175277   64758 cri.go:89] found id: ""
	I0804 00:17:29.175314   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.175324   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:29.175332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:29.175391   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:29.210275   64758 cri.go:89] found id: ""
	I0804 00:17:29.210303   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.210314   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:29.210321   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:29.210382   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:29.246138   64758 cri.go:89] found id: ""
	I0804 00:17:29.246174   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.246186   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:29.246196   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:29.246213   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:29.298935   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:29.298971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:29.313342   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:29.313388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:29.384609   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:29.384635   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:29.384650   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:29.461759   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:29.461795   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:27.611021   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:29.612149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:27.831367   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.327878   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.328772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.101073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.600832   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.010152   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:32.023609   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:32.023677   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:32.062480   64758 cri.go:89] found id: ""
	I0804 00:17:32.062508   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.062517   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:32.062523   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:32.062590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:32.099601   64758 cri.go:89] found id: ""
	I0804 00:17:32.099627   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.099634   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:32.099640   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:32.099691   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:32.138651   64758 cri.go:89] found id: ""
	I0804 00:17:32.138680   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.138689   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:32.138694   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:32.138751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:32.182224   64758 cri.go:89] found id: ""
	I0804 00:17:32.182249   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.182257   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:32.182264   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:32.182318   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:32.224381   64758 cri.go:89] found id: ""
	I0804 00:17:32.224410   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.224421   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:32.224429   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:32.224486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:32.261569   64758 cri.go:89] found id: ""
	I0804 00:17:32.261600   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.261609   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:32.261615   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:32.261663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:32.304769   64758 cri.go:89] found id: ""
	I0804 00:17:32.304793   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.304807   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:32.304814   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:32.304867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:32.348695   64758 cri.go:89] found id: ""
	I0804 00:17:32.348727   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.348736   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:32.348745   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:32.348757   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.389444   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:32.389473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:32.442901   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:32.442938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:32.457562   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:32.457588   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:32.529121   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:32.529144   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:32.529160   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.114712   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:35.129725   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:35.129795   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:35.167226   64758 cri.go:89] found id: ""
	I0804 00:17:35.167248   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.167257   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:35.167262   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:35.167310   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:35.200889   64758 cri.go:89] found id: ""
	I0804 00:17:35.200914   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.200922   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:35.200927   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:35.201000   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:35.234899   64758 cri.go:89] found id: ""
	I0804 00:17:35.234927   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.234938   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:35.234945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:35.235003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:35.271355   64758 cri.go:89] found id: ""
	I0804 00:17:35.271393   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.271405   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:35.271412   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:35.271471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:35.313557   64758 cri.go:89] found id: ""
	I0804 00:17:35.313585   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.313595   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:35.313602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:35.313663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:35.352931   64758 cri.go:89] found id: ""
	I0804 00:17:35.352960   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.352971   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:35.352979   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:35.353046   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:35.391202   64758 cri.go:89] found id: ""
	I0804 00:17:35.391232   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.391256   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:35.391263   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:35.391337   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:35.427599   64758 cri.go:89] found id: ""
	I0804 00:17:35.427627   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.427638   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:35.427649   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:35.427666   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:35.482025   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:35.482061   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:35.498274   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:35.498303   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:35.572606   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:35.572631   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:35.572644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.655534   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:35.655566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.114835   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.610785   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.827077   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.827108   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.601588   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.602210   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:40.602295   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.205756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:38.218974   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:38.219044   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:38.253798   64758 cri.go:89] found id: ""
	I0804 00:17:38.253827   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.253839   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:38.253852   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:38.253911   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:38.291074   64758 cri.go:89] found id: ""
	I0804 00:17:38.291102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.291113   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:38.291120   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:38.291182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:38.332097   64758 cri.go:89] found id: ""
	I0804 00:17:38.332123   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.332133   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:38.332140   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:38.332198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:38.370074   64758 cri.go:89] found id: ""
	I0804 00:17:38.370102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.370110   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:38.370117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:38.370176   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:38.406962   64758 cri.go:89] found id: ""
	I0804 00:17:38.406984   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.406993   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:38.406998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:38.407051   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:38.447532   64758 cri.go:89] found id: ""
	I0804 00:17:38.447562   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.447572   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:38.447579   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:38.447653   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:38.484326   64758 cri.go:89] found id: ""
	I0804 00:17:38.484356   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.484368   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:38.484375   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:38.484444   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:38.521831   64758 cri.go:89] found id: ""
	I0804 00:17:38.521858   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.521869   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:38.521880   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:38.521893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.570540   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:38.570569   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:38.624921   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:38.624953   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:38.639451   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:38.639477   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:38.714435   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:38.714459   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:38.714475   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:41.295160   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:41.310032   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:41.310108   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:41.350363   64758 cri.go:89] found id: ""
	I0804 00:17:41.350393   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.350404   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:41.350412   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:41.350475   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:41.391662   64758 cri.go:89] found id: ""
	I0804 00:17:41.391691   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.391698   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:41.391703   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:41.391760   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:41.429653   64758 cri.go:89] found id: ""
	I0804 00:17:41.429678   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.429686   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:41.429692   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:41.429739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:41.469456   64758 cri.go:89] found id: ""
	I0804 00:17:41.469483   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.469494   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:41.469505   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:41.469566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:41.506124   64758 cri.go:89] found id: ""
	I0804 00:17:41.506154   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.506164   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:41.506171   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:41.506234   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:41.543139   64758 cri.go:89] found id: ""
	I0804 00:17:41.543171   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.543182   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:41.543190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:41.543252   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:41.580537   64758 cri.go:89] found id: ""
	I0804 00:17:41.580568   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.580578   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:41.580585   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:41.580652   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:41.619828   64758 cri.go:89] found id: ""
	I0804 00:17:41.619854   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.619862   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:41.619869   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:41.619882   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:41.660749   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:41.660780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:41.712889   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:41.712924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:41.726422   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:41.726447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:41.805673   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:41.805697   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:41.805712   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:37.110193   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.111203   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.327800   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.327910   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.099815   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.101262   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:44.386563   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:44.399891   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:44.399954   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:44.434270   64758 cri.go:89] found id: ""
	I0804 00:17:44.434297   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.434305   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:44.434311   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:44.434372   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:44.469423   64758 cri.go:89] found id: ""
	I0804 00:17:44.469454   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.469463   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:44.469468   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:44.469535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:44.505511   64758 cri.go:89] found id: ""
	I0804 00:17:44.505539   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.505547   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:44.505553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:44.505602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:44.540897   64758 cri.go:89] found id: ""
	I0804 00:17:44.540922   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.540932   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:44.540937   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:44.540996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:44.578722   64758 cri.go:89] found id: ""
	I0804 00:17:44.578747   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.578755   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:44.578760   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:44.578812   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:44.615838   64758 cri.go:89] found id: ""
	I0804 00:17:44.615863   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.615874   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:44.615881   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:44.615940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:44.657695   64758 cri.go:89] found id: ""
	I0804 00:17:44.657724   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.657734   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:44.657741   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:44.657916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:44.695852   64758 cri.go:89] found id: ""
	I0804 00:17:44.695882   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.695892   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:44.695901   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:44.695912   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:44.754643   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:44.754687   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:44.773964   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:44.773994   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:44.857544   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:44.857567   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:44.857583   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.952987   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:44.953027   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:43.610772   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.611480   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.827218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:46.327323   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.600755   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.099574   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.504957   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:47.520153   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:47.520232   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:47.557303   64758 cri.go:89] found id: ""
	I0804 00:17:47.557326   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.557334   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:47.557339   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:47.557410   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:47.595626   64758 cri.go:89] found id: ""
	I0804 00:17:47.595655   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.595665   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:47.595675   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:47.595733   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:47.633430   64758 cri.go:89] found id: ""
	I0804 00:17:47.633458   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.633466   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:47.633472   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:47.633525   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:47.670116   64758 cri.go:89] found id: ""
	I0804 00:17:47.670140   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.670149   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:47.670154   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:47.670200   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:47.709019   64758 cri.go:89] found id: ""
	I0804 00:17:47.709042   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.709050   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:47.709055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:47.709111   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:47.745230   64758 cri.go:89] found id: ""
	I0804 00:17:47.745251   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.745259   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:47.745265   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:47.745319   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:47.787957   64758 cri.go:89] found id: ""
	I0804 00:17:47.787985   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.787996   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:47.788004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:47.788063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:47.821451   64758 cri.go:89] found id: ""
	I0804 00:17:47.821477   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.821488   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:47.821498   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:47.821516   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:47.903035   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:47.903139   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:47.903162   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:47.986659   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:47.986702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:48.037921   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:48.037951   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:48.095354   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:48.095389   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:50.613264   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:50.627717   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:50.627792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:50.669311   64758 cri.go:89] found id: ""
	I0804 00:17:50.669338   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.669347   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:50.669370   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:50.669438   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:50.714674   64758 cri.go:89] found id: ""
	I0804 00:17:50.714704   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.714713   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:50.714718   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:50.714769   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:50.755291   64758 cri.go:89] found id: ""
	I0804 00:17:50.755318   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.755326   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:50.755332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:50.755394   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:50.801927   64758 cri.go:89] found id: ""
	I0804 00:17:50.801955   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.801964   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:50.801970   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:50.802020   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:50.845096   64758 cri.go:89] found id: ""
	I0804 00:17:50.845121   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.845130   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:50.845136   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:50.845193   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:50.882664   64758 cri.go:89] found id: ""
	I0804 00:17:50.882694   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.882705   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:50.882712   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:50.882771   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:50.921233   64758 cri.go:89] found id: ""
	I0804 00:17:50.921260   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.921268   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:50.921273   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:50.921326   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:50.955254   64758 cri.go:89] found id: ""
	I0804 00:17:50.955286   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.955298   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:50.955311   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:50.955329   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:51.010001   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:51.010037   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:51.024943   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:51.024966   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:51.096095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:51.096123   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:51.096139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:51.177829   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:51.177864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:47.611778   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.110408   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:48.328693   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.828022   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.609609   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.100616   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:53.720665   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:53.736318   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:53.736380   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:53.772887   64758 cri.go:89] found id: ""
	I0804 00:17:53.772916   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.772926   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:53.772934   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:53.772995   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:53.811771   64758 cri.go:89] found id: ""
	I0804 00:17:53.811797   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.811837   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:53.811845   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:53.811906   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:53.846684   64758 cri.go:89] found id: ""
	I0804 00:17:53.846716   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.846726   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:53.846736   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:53.846798   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:53.883550   64758 cri.go:89] found id: ""
	I0804 00:17:53.883581   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.883592   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:53.883600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:53.883662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:53.921031   64758 cri.go:89] found id: ""
	I0804 00:17:53.921061   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.921072   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:53.921080   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:53.921153   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:53.960338   64758 cri.go:89] found id: ""
	I0804 00:17:53.960364   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.960374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:53.960381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:53.960441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:53.998404   64758 cri.go:89] found id: ""
	I0804 00:17:53.998434   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.998450   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:53.998458   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:53.998520   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:54.033417   64758 cri.go:89] found id: ""
	I0804 00:17:54.033444   64758 logs.go:276] 0 containers: []
	W0804 00:17:54.033453   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:54.033461   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:54.033473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:54.071945   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:54.071971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:54.124614   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:54.124644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:54.140757   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:54.140783   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:54.241735   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:54.241754   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:54.241769   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:56.821591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:56.836569   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:56.836631   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:56.872013   64758 cri.go:89] found id: ""
	I0804 00:17:56.872039   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.872048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:56.872054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:56.872110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:52.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.111566   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.828335   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:54.830625   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.831382   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:57.101663   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.600253   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.908022   64758 cri.go:89] found id: ""
	I0804 00:17:56.908051   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.908061   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:56.908067   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:56.908114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:56.943309   64758 cri.go:89] found id: ""
	I0804 00:17:56.943336   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.943347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:56.943359   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:56.943415   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:56.977799   64758 cri.go:89] found id: ""
	I0804 00:17:56.977839   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.977847   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:56.977853   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:56.977916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:57.015185   64758 cri.go:89] found id: ""
	I0804 00:17:57.015213   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.015223   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:57.015237   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:57.015295   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:57.051856   64758 cri.go:89] found id: ""
	I0804 00:17:57.051879   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.051887   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:57.051893   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:57.051944   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:57.086349   64758 cri.go:89] found id: ""
	I0804 00:17:57.086376   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.086387   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:57.086393   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:57.086439   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:57.125005   64758 cri.go:89] found id: ""
	I0804 00:17:57.125048   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.125064   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:57.125076   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:57.125090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:57.200348   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:57.200382   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:57.240899   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:57.240924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.294331   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:57.294375   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:57.308388   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:57.308429   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:57.382602   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:59.883070   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:59.897055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:59.897116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:59.932983   64758 cri.go:89] found id: ""
	I0804 00:17:59.933012   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.933021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:59.933029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:59.933088   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:59.971781   64758 cri.go:89] found id: ""
	I0804 00:17:59.971807   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.971815   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:59.971820   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:59.971878   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:00.008381   64758 cri.go:89] found id: ""
	I0804 00:18:00.008406   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.008414   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:00.008419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:00.008483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:00.053257   64758 cri.go:89] found id: ""
	I0804 00:18:00.053281   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.053290   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:00.053295   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:00.053342   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:00.089891   64758 cri.go:89] found id: ""
	I0804 00:18:00.089925   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.089936   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:00.089943   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:00.090008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:00.129833   64758 cri.go:89] found id: ""
	I0804 00:18:00.129863   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.129875   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:00.129884   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:00.129942   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:00.181324   64758 cri.go:89] found id: ""
	I0804 00:18:00.181390   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.181403   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:00.181410   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:00.181471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:00.224426   64758 cri.go:89] found id: ""
	I0804 00:18:00.224451   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.224459   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:00.224467   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:00.224481   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:00.240122   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:00.240155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:00.317324   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:00.317346   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:00.317379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:00.398917   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:00.398952   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:00.440730   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:00.440758   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.111741   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.611509   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.327597   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:01.328678   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.099384   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.100512   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.992128   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:03.006787   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:03.006870   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:03.041291   64758 cri.go:89] found id: ""
	I0804 00:18:03.041321   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.041332   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:03.041341   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:03.041427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:03.077822   64758 cri.go:89] found id: ""
	I0804 00:18:03.077851   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.077863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:03.077871   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:03.077928   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:03.116579   64758 cri.go:89] found id: ""
	I0804 00:18:03.116603   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.116611   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:03.116616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:03.116662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:03.154904   64758 cri.go:89] found id: ""
	I0804 00:18:03.154931   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.154942   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:03.154950   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:03.155018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:03.190300   64758 cri.go:89] found id: ""
	I0804 00:18:03.190328   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.190341   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:03.190349   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:03.190413   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:03.225975   64758 cri.go:89] found id: ""
	I0804 00:18:03.226004   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.226016   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:03.226023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:03.226087   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:03.271499   64758 cri.go:89] found id: ""
	I0804 00:18:03.271525   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.271535   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:03.271543   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:03.271602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:03.308643   64758 cri.go:89] found id: ""
	I0804 00:18:03.308668   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.308675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:03.308684   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:03.308698   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:03.324528   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:03.324562   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:03.401102   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:03.401125   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:03.401139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:03.481817   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:03.481864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:03.522568   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:03.522601   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.074678   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:06.089765   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:06.089844   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:06.128372   64758 cri.go:89] found id: ""
	I0804 00:18:06.128400   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.128411   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:06.128419   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:06.128467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:06.169488   64758 cri.go:89] found id: ""
	I0804 00:18:06.169515   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.169525   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:06.169532   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:06.169590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:06.207969   64758 cri.go:89] found id: ""
	I0804 00:18:06.207998   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.208009   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:06.208015   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:06.208067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:06.244497   64758 cri.go:89] found id: ""
	I0804 00:18:06.244521   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.244529   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:06.244535   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:06.244592   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:06.282905   64758 cri.go:89] found id: ""
	I0804 00:18:06.282935   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.282945   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:06.282952   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:06.283013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:06.322498   64758 cri.go:89] found id: ""
	I0804 00:18:06.322523   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.322530   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:06.322537   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:06.322583   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:06.361377   64758 cri.go:89] found id: ""
	I0804 00:18:06.361402   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.361412   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:06.361420   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:06.361485   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:06.402082   64758 cri.go:89] found id: ""
	I0804 00:18:06.402112   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.402120   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:06.402128   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:06.402141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.452052   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:06.452089   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:06.466695   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:06.466734   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:06.546115   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:06.546140   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:06.546155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:06.639671   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:06.639708   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:02.111360   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.612557   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:03.330392   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:05.828925   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.603713   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.100060   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.193473   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:09.207696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:09.207755   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:09.247757   64758 cri.go:89] found id: ""
	I0804 00:18:09.247784   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.247795   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:09.247802   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:09.247867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:09.285516   64758 cri.go:89] found id: ""
	I0804 00:18:09.285549   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.285559   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:09.285567   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:09.285628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:09.321689   64758 cri.go:89] found id: ""
	I0804 00:18:09.321715   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.321725   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:09.321732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:09.321789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:09.358135   64758 cri.go:89] found id: ""
	I0804 00:18:09.358158   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.358166   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:09.358176   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:09.358223   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:09.393642   64758 cri.go:89] found id: ""
	I0804 00:18:09.393667   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.393675   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:09.393681   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:09.393730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:09.430651   64758 cri.go:89] found id: ""
	I0804 00:18:09.430674   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.430683   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:09.430689   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:09.430734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:09.472433   64758 cri.go:89] found id: ""
	I0804 00:18:09.472460   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.472469   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:09.472474   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:09.472533   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:09.511147   64758 cri.go:89] found id: ""
	I0804 00:18:09.511171   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.511179   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:09.511187   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:09.511207   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:09.560099   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:09.560142   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:09.574609   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:09.574641   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:09.646863   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:09.646891   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:09.646906   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:09.727309   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:09.727352   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:09.111726   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.611445   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:08.329278   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:10.827361   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.600326   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:14.099811   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:12.268925   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:12.284737   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:12.284813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:12.326015   64758 cri.go:89] found id: ""
	I0804 00:18:12.326036   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.326044   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:12.326049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:12.326095   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:12.374096   64758 cri.go:89] found id: ""
	I0804 00:18:12.374129   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.374138   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:12.374143   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:12.374235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:12.426467   64758 cri.go:89] found id: ""
	I0804 00:18:12.426493   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.426502   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:12.426509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:12.426570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:12.485034   64758 cri.go:89] found id: ""
	I0804 00:18:12.485060   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.485072   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:12.485079   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:12.485138   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:12.523490   64758 cri.go:89] found id: ""
	I0804 00:18:12.523517   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.523525   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:12.523530   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:12.523577   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:12.563318   64758 cri.go:89] found id: ""
	I0804 00:18:12.563347   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.563358   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:12.563365   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:12.563425   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:12.600455   64758 cri.go:89] found id: ""
	I0804 00:18:12.600482   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.600492   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:12.600499   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:12.600566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:12.641146   64758 cri.go:89] found id: ""
	I0804 00:18:12.641170   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.641178   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:12.641186   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:12.641197   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:12.697240   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:12.697274   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:12.711399   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:12.711432   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:12.794022   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:12.794050   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:12.794067   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:12.881327   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:12.881379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:15.425765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:15.439338   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:15.439420   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:15.477964   64758 cri.go:89] found id: ""
	I0804 00:18:15.477991   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.478002   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:15.478009   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:15.478069   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:15.514554   64758 cri.go:89] found id: ""
	I0804 00:18:15.514574   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.514583   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:15.514588   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:15.514636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:15.549702   64758 cri.go:89] found id: ""
	I0804 00:18:15.549732   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.549741   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:15.549747   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:15.549813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:15.584619   64758 cri.go:89] found id: ""
	I0804 00:18:15.584663   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.584675   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:15.584683   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:15.584746   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:15.625084   64758 cri.go:89] found id: ""
	I0804 00:18:15.625111   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.625121   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:15.625128   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:15.625192   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:15.666629   64758 cri.go:89] found id: ""
	I0804 00:18:15.666655   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.666664   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:15.666673   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:15.666726   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:15.704287   64758 cri.go:89] found id: ""
	I0804 00:18:15.704316   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.704324   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:15.704330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:15.704383   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:15.740629   64758 cri.go:89] found id: ""
	I0804 00:18:15.740659   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.740668   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:15.740678   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:15.740702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:15.794093   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:15.794124   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:15.807629   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:15.807659   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:15.887638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:15.887665   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:15.887680   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:15.972935   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:15.972978   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:13.611758   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.613472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:13.327640   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.827432   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:16.100599   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.603192   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.518022   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:18.532360   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:18.532433   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:18.565519   64758 cri.go:89] found id: ""
	I0804 00:18:18.565544   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.565552   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:18.565557   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:18.565612   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:18.599939   64758 cri.go:89] found id: ""
	I0804 00:18:18.599967   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.599978   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:18.599985   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:18.600055   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:18.639035   64758 cri.go:89] found id: ""
	I0804 00:18:18.639062   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.639070   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:18.639076   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:18.639124   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:18.677188   64758 cri.go:89] found id: ""
	I0804 00:18:18.677210   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.677218   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:18.677223   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:18.677268   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:18.715892   64758 cri.go:89] found id: ""
	I0804 00:18:18.715921   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.715932   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:18.715940   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:18.716005   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:18.752274   64758 cri.go:89] found id: ""
	I0804 00:18:18.752298   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.752307   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:18.752313   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:18.752368   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:18.795251   64758 cri.go:89] found id: ""
	I0804 00:18:18.795279   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.795288   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:18.795293   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:18.795353   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.830842   64758 cri.go:89] found id: ""
	I0804 00:18:18.830866   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.830874   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:18.830882   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:18.830893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:18.883687   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:18.883719   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:18.898406   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:18.898433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:18.973191   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:18.973215   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:18.973231   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:19.054094   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:19.054141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:21.597245   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:21.612534   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:21.612605   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:21.649391   64758 cri.go:89] found id: ""
	I0804 00:18:21.649415   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.649426   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:21.649434   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:21.649492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:21.683202   64758 cri.go:89] found id: ""
	I0804 00:18:21.683226   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.683233   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:21.683244   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:21.683300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:21.717450   64758 cri.go:89] found id: ""
	I0804 00:18:21.717475   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.717484   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:21.717489   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:21.717547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:21.752559   64758 cri.go:89] found id: ""
	I0804 00:18:21.752588   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.752596   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:21.752602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:21.752650   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:21.788336   64758 cri.go:89] found id: ""
	I0804 00:18:21.788364   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.788375   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:21.788381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:21.788428   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:21.829404   64758 cri.go:89] found id: ""
	I0804 00:18:21.829428   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.829436   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:21.829443   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:21.829502   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:21.869473   64758 cri.go:89] found id: ""
	I0804 00:18:21.869504   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.869515   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:21.869521   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:21.869587   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.111377   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.610253   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:17.827889   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.327830   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.100486   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:23.599788   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.601620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.909883   64758 cri.go:89] found id: ""
	I0804 00:18:21.909907   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.909915   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:21.909923   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:21.909940   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:21.925038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:21.925071   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:22.000261   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.000281   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:22.000294   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:22.082813   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:22.082846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:22.126741   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:22.126774   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:24.677246   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:24.692404   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:24.692467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:24.739001   64758 cri.go:89] found id: ""
	I0804 00:18:24.739039   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.739049   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:24.739054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:24.739119   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:24.779558   64758 cri.go:89] found id: ""
	I0804 00:18:24.779586   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.779597   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:24.779605   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:24.779664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:24.819257   64758 cri.go:89] found id: ""
	I0804 00:18:24.819284   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.819295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:24.819301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:24.819363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:24.862504   64758 cri.go:89] found id: ""
	I0804 00:18:24.862531   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.862539   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:24.862544   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:24.862599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:24.899605   64758 cri.go:89] found id: ""
	I0804 00:18:24.899637   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.899649   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:24.899656   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:24.899716   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:24.936575   64758 cri.go:89] found id: ""
	I0804 00:18:24.936604   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.936612   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:24.936618   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:24.936667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:24.971736   64758 cri.go:89] found id: ""
	I0804 00:18:24.971764   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.971775   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:24.971782   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:24.971851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:25.010214   64758 cri.go:89] found id: ""
	I0804 00:18:25.010244   64758 logs.go:276] 0 containers: []
	W0804 00:18:25.010253   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:25.010265   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:25.010279   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:25.091145   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:25.091186   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:25.137574   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:25.137603   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:25.189559   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:25.189593   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:25.204725   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:25.204763   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:25.278903   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.111666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:22.827542   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:24.829587   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.326922   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:28.100576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:30.603955   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.779500   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:27.793548   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:27.793628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:27.830811   64758 cri.go:89] found id: ""
	I0804 00:18:27.830844   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.830854   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:27.830862   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:27.830919   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:27.869966   64758 cri.go:89] found id: ""
	I0804 00:18:27.869991   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.869998   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:27.870004   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:27.870062   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:27.909474   64758 cri.go:89] found id: ""
	I0804 00:18:27.909496   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.909504   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:27.909509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:27.909567   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:27.948588   64758 cri.go:89] found id: ""
	I0804 00:18:27.948613   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.948625   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:27.948632   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:27.948704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:27.991957   64758 cri.go:89] found id: ""
	I0804 00:18:27.991979   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.991987   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:27.991993   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:27.992052   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:28.029516   64758 cri.go:89] found id: ""
	I0804 00:18:28.029544   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.029555   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:28.029562   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:28.029627   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:28.067851   64758 cri.go:89] found id: ""
	I0804 00:18:28.067879   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.067891   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:28.067898   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:28.067957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:28.107488   64758 cri.go:89] found id: ""
	I0804 00:18:28.107514   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.107524   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:28.107534   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:28.107548   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:28.158490   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:28.158523   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:28.172000   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:28.172030   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:28.247803   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:28.247823   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:28.247839   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:28.326695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:28.326727   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:30.867241   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:30.881074   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:30.881146   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:30.919078   64758 cri.go:89] found id: ""
	I0804 00:18:30.919105   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.919115   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:30.919122   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:30.919184   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:30.954436   64758 cri.go:89] found id: ""
	I0804 00:18:30.954463   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.954474   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:30.954481   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:30.954546   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:30.993080   64758 cri.go:89] found id: ""
	I0804 00:18:30.993110   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.993121   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:30.993129   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:30.993188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:31.031465   64758 cri.go:89] found id: ""
	I0804 00:18:31.031493   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.031504   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:31.031512   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:31.031570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:31.067374   64758 cri.go:89] found id: ""
	I0804 00:18:31.067405   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.067416   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:31.067423   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:31.067493   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:31.104021   64758 cri.go:89] found id: ""
	I0804 00:18:31.104048   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.104059   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:31.104066   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:31.104128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:31.146995   64758 cri.go:89] found id: ""
	I0804 00:18:31.147023   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.147033   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:31.147040   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:31.147106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:31.184708   64758 cri.go:89] found id: ""
	I0804 00:18:31.184739   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.184749   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:31.184760   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:31.184776   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:31.237743   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:31.237781   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:31.252038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:31.252070   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:31.326357   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:31.326380   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:31.326401   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:31.408212   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:31.408256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:27.610666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.610899   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:31.611472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.827980   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:32.326666   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.099814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:35.100740   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.954396   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:33.968311   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:33.968384   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:34.006574   64758 cri.go:89] found id: ""
	I0804 00:18:34.006605   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.006625   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:34.006635   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:34.006698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:34.042400   64758 cri.go:89] found id: ""
	I0804 00:18:34.042427   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.042435   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:34.042441   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:34.042492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:34.080769   64758 cri.go:89] found id: ""
	I0804 00:18:34.080793   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.080804   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:34.080810   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:34.080877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:34.118283   64758 cri.go:89] found id: ""
	I0804 00:18:34.118311   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.118320   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:34.118326   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:34.118377   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:34.153679   64758 cri.go:89] found id: ""
	I0804 00:18:34.153708   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.153719   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:34.153727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:34.153780   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:34.189618   64758 cri.go:89] found id: ""
	I0804 00:18:34.189674   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.189686   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:34.189696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:34.189770   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:34.224628   64758 cri.go:89] found id: ""
	I0804 00:18:34.224666   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.224677   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:34.224684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:34.224744   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:34.265343   64758 cri.go:89] found id: ""
	I0804 00:18:34.265389   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.265399   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:34.265409   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:34.265428   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:34.337992   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:34.338014   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:34.338025   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:34.420224   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:34.420263   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:34.462009   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:34.462042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:34.520087   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:34.520120   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:34.111351   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.112271   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:34.328807   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.827190   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.599447   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.099291   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.035398   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:37.048955   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:37.049024   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:37.087433   64758 cri.go:89] found id: ""
	I0804 00:18:37.087460   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.087470   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:37.087478   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:37.087542   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:37.128227   64758 cri.go:89] found id: ""
	I0804 00:18:37.128255   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.128267   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:37.128275   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:37.128328   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:37.165371   64758 cri.go:89] found id: ""
	I0804 00:18:37.165405   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.165415   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:37.165424   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:37.165486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:37.201168   64758 cri.go:89] found id: ""
	I0804 00:18:37.201198   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.201209   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:37.201217   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:37.201278   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:37.237378   64758 cri.go:89] found id: ""
	I0804 00:18:37.237406   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.237414   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:37.237419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:37.237465   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:37.273425   64758 cri.go:89] found id: ""
	I0804 00:18:37.273456   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.273467   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:37.273475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:37.273547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:37.313019   64758 cri.go:89] found id: ""
	I0804 00:18:37.313048   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.313056   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:37.313061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:37.313116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:37.354741   64758 cri.go:89] found id: ""
	I0804 00:18:37.354771   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.354779   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:37.354788   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:37.354800   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:37.408703   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:37.408740   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.423393   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:37.423419   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:37.497460   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:37.497487   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:37.497501   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:37.579811   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:37.579856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:40.122872   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:40.139106   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:40.139177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:40.178571   64758 cri.go:89] found id: ""
	I0804 00:18:40.178599   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.178610   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:40.178617   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:40.178679   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:40.215680   64758 cri.go:89] found id: ""
	I0804 00:18:40.215714   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.215722   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:40.215728   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:40.215776   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:40.250618   64758 cri.go:89] found id: ""
	I0804 00:18:40.250647   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.250658   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:40.250666   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:40.250729   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:40.289195   64758 cri.go:89] found id: ""
	I0804 00:18:40.289223   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.289233   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:40.289240   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:40.289296   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:40.330961   64758 cri.go:89] found id: ""
	I0804 00:18:40.330988   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.330998   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:40.331006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:40.331056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:40.376435   64758 cri.go:89] found id: ""
	I0804 00:18:40.376465   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.376478   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:40.376487   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:40.376558   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:40.416415   64758 cri.go:89] found id: ""
	I0804 00:18:40.416447   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.416459   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:40.416465   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:40.416535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:40.452958   64758 cri.go:89] found id: ""
	I0804 00:18:40.452996   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.453007   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:40.453018   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:40.453036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:40.503775   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:40.503808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:40.517825   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:40.517855   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:40.587818   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:40.587847   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:40.587861   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:40.674139   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:40.674183   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:38.611068   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.611923   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:39.326489   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:41.327327   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:42.100795   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:44.602441   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.217266   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:43.232190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:43.232262   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:43.270127   64758 cri.go:89] found id: ""
	I0804 00:18:43.270156   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.270163   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:43.270169   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:43.270219   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:43.309401   64758 cri.go:89] found id: ""
	I0804 00:18:43.309429   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.309439   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:43.309446   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:43.309503   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:43.347210   64758 cri.go:89] found id: ""
	I0804 00:18:43.347235   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.347242   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:43.347247   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:43.347300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:43.382548   64758 cri.go:89] found id: ""
	I0804 00:18:43.382578   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.382588   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:43.382595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:43.382658   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:43.422076   64758 cri.go:89] found id: ""
	I0804 00:18:43.422102   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.422113   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:43.422121   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:43.422168   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:43.458525   64758 cri.go:89] found id: ""
	I0804 00:18:43.458552   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.458560   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:43.458566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:43.458623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:43.498134   64758 cri.go:89] found id: ""
	I0804 00:18:43.498157   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.498165   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:43.498170   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:43.498217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:43.543289   64758 cri.go:89] found id: ""
	I0804 00:18:43.543312   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.543320   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:43.543328   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:43.543338   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:43.593489   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:43.593521   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:43.607838   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:43.607869   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:43.682791   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:43.682813   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:43.682826   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:43.761695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:43.761737   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.305385   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:46.320003   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:46.320063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:46.367941   64758 cri.go:89] found id: ""
	I0804 00:18:46.367969   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.367980   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:46.367986   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:46.368058   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:46.422540   64758 cri.go:89] found id: ""
	I0804 00:18:46.422563   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.422572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:46.422578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:46.422637   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:46.470192   64758 cri.go:89] found id: ""
	I0804 00:18:46.470238   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.470248   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:46.470257   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:46.470316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:46.512375   64758 cri.go:89] found id: ""
	I0804 00:18:46.512399   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.512408   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:46.512413   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:46.512471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:46.546547   64758 cri.go:89] found id: ""
	I0804 00:18:46.546580   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.546592   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:46.546600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:46.546665   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:46.583598   64758 cri.go:89] found id: ""
	I0804 00:18:46.583621   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.583630   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:46.583636   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:46.583692   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:46.621066   64758 cri.go:89] found id: ""
	I0804 00:18:46.621101   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.621116   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:46.621122   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:46.621177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:46.654115   64758 cri.go:89] found id: ""
	I0804 00:18:46.654149   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.654162   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:46.654174   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:46.654191   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:46.738542   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:46.738582   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.778894   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:46.778923   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:46.833225   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:46.833257   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:46.847222   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:46.847247   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:18:42.612522   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.327420   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.327936   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:47.328380   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:46.604576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.100232   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:18:46.922590   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.423639   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:49.437417   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:49.437490   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:49.474889   64758 cri.go:89] found id: ""
	I0804 00:18:49.474914   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.474923   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:49.474929   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:49.474986   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:49.512860   64758 cri.go:89] found id: ""
	I0804 00:18:49.512889   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.512900   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:49.512908   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:49.512965   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:49.550558   64758 cri.go:89] found id: ""
	I0804 00:18:49.550594   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.550603   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:49.550611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:49.550671   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:49.587779   64758 cri.go:89] found id: ""
	I0804 00:18:49.587810   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.587823   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:49.587831   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:49.587890   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:49.630307   64758 cri.go:89] found id: ""
	I0804 00:18:49.630333   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.630344   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:49.630352   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:49.630411   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:49.665013   64758 cri.go:89] found id: ""
	I0804 00:18:49.665046   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.665057   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:49.665064   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:49.665127   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:49.701375   64758 cri.go:89] found id: ""
	I0804 00:18:49.701401   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.701410   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:49.701415   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:49.701472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:49.737237   64758 cri.go:89] found id: ""
	I0804 00:18:49.737261   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.737269   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:49.737278   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:49.737291   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:49.790998   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:49.791033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:49.804933   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:49.804965   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:49.877997   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.878019   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:49.878035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:49.963836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:49.963872   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:47.611774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.612581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.616560   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.827900   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.829950   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.599613   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:53.600496   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:52.506621   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:52.521482   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:52.521553   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:52.555980   64758 cri.go:89] found id: ""
	I0804 00:18:52.556010   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.556021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:52.556029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:52.556094   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:52.593088   64758 cri.go:89] found id: ""
	I0804 00:18:52.593119   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.593130   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:52.593138   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:52.593197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:52.632058   64758 cri.go:89] found id: ""
	I0804 00:18:52.632088   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.632107   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:52.632115   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:52.632177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:52.668701   64758 cri.go:89] found id: ""
	I0804 00:18:52.668730   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.668739   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:52.668745   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:52.668814   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:52.705041   64758 cri.go:89] found id: ""
	I0804 00:18:52.705068   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.705075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:52.705089   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:52.705149   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:52.743304   64758 cri.go:89] found id: ""
	I0804 00:18:52.743327   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.743335   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:52.743340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:52.743397   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:52.781020   64758 cri.go:89] found id: ""
	I0804 00:18:52.781050   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.781060   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:52.781073   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:52.781134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:52.820979   64758 cri.go:89] found id: ""
	I0804 00:18:52.821004   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.821014   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:52.821024   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:52.821042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:52.876450   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:52.876488   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:52.890529   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:52.890566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:52.960682   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:52.960710   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:52.960725   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:53.044000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:53.044040   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:55.601594   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:55.615574   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:55.615645   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:55.655116   64758 cri.go:89] found id: ""
	I0804 00:18:55.655146   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.655157   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:55.655164   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:55.655217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:55.695809   64758 cri.go:89] found id: ""
	I0804 00:18:55.695837   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.695846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:55.695851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:55.695909   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:55.732784   64758 cri.go:89] found id: ""
	I0804 00:18:55.732811   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.732822   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:55.732828   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:55.732920   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:55.773316   64758 cri.go:89] found id: ""
	I0804 00:18:55.773338   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.773347   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:55.773368   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:55.773416   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:55.808886   64758 cri.go:89] found id: ""
	I0804 00:18:55.808913   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.808924   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:55.808931   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:55.808990   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:55.848471   64758 cri.go:89] found id: ""
	I0804 00:18:55.848499   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.848507   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:55.848513   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:55.848568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:55.884088   64758 cri.go:89] found id: ""
	I0804 00:18:55.884117   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.884128   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:55.884134   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:55.884194   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:55.918194   64758 cri.go:89] found id: ""
	I0804 00:18:55.918222   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.918233   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:55.918243   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:55.918264   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:55.932685   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:55.932717   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:56.003817   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:56.003840   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:56.003856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:56.087804   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:56.087846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:56.129959   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:56.129993   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:54.111584   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.610664   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:54.327283   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.328332   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.100620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.601669   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.604763   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.685077   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:58.698624   64758 kubeadm.go:597] duration metric: took 4m4.179874556s to restartPrimaryControlPlane
	W0804 00:18:58.698704   64758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:18:58.698731   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:18:58.611004   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.611252   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.828188   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:01.329218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.100214   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.101275   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.967117   64758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.268366381s)
	I0804 00:19:03.967202   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:19:03.982098   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:19:03.991994   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:19:04.002780   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:19:04.002802   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:19:04.002845   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:19:04.012216   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:19:04.012279   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:19:04.021463   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:19:04.030689   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:19:04.030743   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:19:04.040801   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.050496   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:19:04.050558   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.060782   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:19:04.071595   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:19:04.071673   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:19:04.082123   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:19:04.313172   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:19:02.611712   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.111575   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.827427   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:06.327317   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.599775   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:09.599814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.611608   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.110194   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:08.333681   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.829626   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:11.601081   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.099098   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:12.110388   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.111401   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:13.327035   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:15.327695   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:17.327749   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.100543   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.602723   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:20.603470   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.611336   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.111798   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:19.329120   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.826869   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:22.605600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.101500   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:23.610581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.610814   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:24.326982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:26.827772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:27.599557   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.600283   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:28.110748   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:30.111027   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.327031   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:31.328581   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.101571   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.601251   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.610784   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.612611   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:33.828237   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:35.828319   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.099717   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.100492   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.111009   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.610805   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:38.326730   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:40.327548   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.330066   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:41.600239   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:43.600686   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.601464   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.110900   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:44.610221   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.605124   65087 pod_ready.go:81] duration metric: took 4m0.000843677s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:45.605152   65087 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0804 00:19:45.605175   65087 pod_ready.go:38] duration metric: took 4m13.615224515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:45.605208   65087 kubeadm.go:597] duration metric: took 4m21.736484018s to restartPrimaryControlPlane
	W0804 00:19:45.605273   65087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:19:45.605304   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
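Here the 4m0s extra wait for metrics-server-6867b74b74-5xfgz to become Ready expires, so the control-plane restart is abandoned and the run falls back to kubeadm reset followed by a fresh init. A minimal client-go sketch of that kind of bounded readiness poll follows; the helper name, the 2s poll interval, and the overall program shape are assumptions, not minikube's pod_ready implementation:

    // Poll a pod's Ready condition every 2s until it is True or a 4-minute
    // deadline expires, roughly mirroring the wait that timed out above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient errors
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitPodReady(context.Background(), client, "kube-system", "metrics-server-6867b74b74-5xfgz"); err != nil {
    		fmt.Println("pod never became Ready:", err)
    	}
    }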
	I0804 00:19:44.827547   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:47.329541   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:48.101237   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:50.603754   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:49.826561   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:51.828643   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.100714   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:55.102037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.832996   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:54.830906   65441 pod_ready.go:81] duration metric: took 4m0.010324747s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:54.830936   65441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:19:54.830947   65441 pod_ready.go:38] duration metric: took 4m4.842701336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:54.830968   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:19:54.831003   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:54.831070   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:54.887772   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:54.887804   65441 cri.go:89] found id: ""
	I0804 00:19:54.887815   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:54.887877   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.892740   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:54.892801   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:54.943044   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:54.943082   65441 cri.go:89] found id: ""
	I0804 00:19:54.943092   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:54.943164   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.947699   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:54.947765   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:54.997280   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:54.997302   65441 cri.go:89] found id: ""
	I0804 00:19:54.997311   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:54.997380   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.005574   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:55.005642   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:55.066824   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:55.066845   65441 cri.go:89] found id: ""
	I0804 00:19:55.066852   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:55.066906   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.071713   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:55.071779   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:55.116381   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.116406   65441 cri.go:89] found id: ""
	I0804 00:19:55.116414   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:55.116468   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.121174   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:55.121237   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:55.168300   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:55.168323   65441 cri.go:89] found id: ""
	I0804 00:19:55.168331   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:55.168381   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.173450   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:55.173509   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:55.218999   65441 cri.go:89] found id: ""
	I0804 00:19:55.219030   65441 logs.go:276] 0 containers: []
	W0804 00:19:55.219041   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:55.219048   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:55.219115   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:55.263696   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:55.263723   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.263727   65441 cri.go:89] found id: ""
	I0804 00:19:55.263734   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:55.263789   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.269001   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.277864   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:19:55.277899   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.323692   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:55.323729   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.364971   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:55.365005   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:55.871942   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:19:55.871983   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:19:55.929828   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:55.929869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:55.987389   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:55.987425   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:56.041330   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:56.041381   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:56.082524   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:56.082556   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:56.122545   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:19:56.122572   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:56.178249   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:19:56.178288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:56.219273   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:19:56.219300   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:19:56.235345   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:19:56.235389   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:19:56.370660   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:56.370692   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
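Each of the "Gathering logs for ..." steps above follows the same pattern: resolve a component's container ID with crictl ps --name, then tail the last 400 lines with crictl logs. A rough local equivalent in Go, run with os/exec rather than over minikube's SSH runner (that simplification is an assumption for illustration):

    // For each control-plane component, list matching CRI container IDs and
    // tail the last 400 log lines of each, as the log-gathering loop does.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Println("listing", name, "failed:", err)
    			continue
    		}
    		for _, id := range strings.Fields(string(out)) {
    			fmt.Printf("=== %s [%s] ===\n", name, id)
    			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Print(string(logs))
    		}
    	}
    }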
	I0804 00:19:57.600248   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:00.100920   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:58.936934   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:19:58.953624   65441 api_server.go:72] duration metric: took 4m14.22488371s to wait for apiserver process to appear ...
	I0804 00:19:58.953655   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:19:58.953700   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:58.953764   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:58.997408   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:58.997434   65441 cri.go:89] found id: ""
	I0804 00:19:58.997443   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:58.997492   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.004398   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:59.004466   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:59.041483   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.041510   65441 cri.go:89] found id: ""
	I0804 00:19:59.041518   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:59.041568   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.045754   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:59.045825   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:59.081738   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.081756   65441 cri.go:89] found id: ""
	I0804 00:19:59.081764   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:59.081809   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.086297   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:59.086348   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:59.124421   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:59.124440   65441 cri.go:89] found id: ""
	I0804 00:19:59.124447   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:59.124494   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.128612   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:59.128677   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:59.165702   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:59.165728   65441 cri.go:89] found id: ""
	I0804 00:19:59.165737   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:59.165791   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.170016   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:59.170103   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:59.205275   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:59.205299   65441 cri.go:89] found id: ""
	I0804 00:19:59.205307   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:59.205377   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.209637   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:59.209699   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:59.244254   65441 cri.go:89] found id: ""
	I0804 00:19:59.244281   65441 logs.go:276] 0 containers: []
	W0804 00:19:59.244290   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:59.244295   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:59.244343   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:59.281850   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:59.281876   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.281880   65441 cri.go:89] found id: ""
	I0804 00:19:59.281887   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:59.281935   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.286423   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.291108   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:59.291134   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.340778   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:59.340808   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.379258   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:59.379288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.418902   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:59.418932   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:59.875668   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:59.875708   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:59.932947   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:59.932980   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:59.980190   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:59.980224   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:00.024331   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:00.024359   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:00.064676   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:00.064701   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:00.117684   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:00.117717   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:00.153654   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:00.153683   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:00.200840   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:00.200869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:00.214380   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:00.214410   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:02.101240   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:04.600064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:02.832546   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:20:02.837684   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:20:02.838736   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:02.838763   65441 api_server.go:131] duration metric: took 3.885096913s to wait for apiserver health ...
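Once the apiserver process is up, the run probes the /healthz endpoint until it answers 200 with body "ok", as seen above. A minimal sketch of such a probe in Go; skipping certificate verification here is purely an illustrative shortcut, since a production client would trust the cluster CA instead:

    // Probe the apiserver healthz endpoint and print the status and body.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Illustration only: a real client should verify against the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.132:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
    }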
	I0804 00:20:02.838773   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:02.838798   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:02.838856   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:02.878530   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:02.878556   65441 cri.go:89] found id: ""
	I0804 00:20:02.878565   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:20:02.878628   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.883263   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:02.883338   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:02.921989   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:02.922009   65441 cri.go:89] found id: ""
	I0804 00:20:02.922017   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:20:02.922062   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.928690   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:02.928767   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:02.967469   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:02.967490   65441 cri.go:89] found id: ""
	I0804 00:20:02.967498   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:20:02.967544   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.972155   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:02.972217   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:03.011875   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:03.011900   65441 cri.go:89] found id: ""
	I0804 00:20:03.011910   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:20:03.011966   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.016326   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:03.016395   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:03.057114   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:03.057137   65441 cri.go:89] found id: ""
	I0804 00:20:03.057145   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:20:03.057206   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.061528   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:03.061592   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:03.101778   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:03.101832   65441 cri.go:89] found id: ""
	I0804 00:20:03.101842   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:20:03.101902   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.106292   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:03.106368   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:03.146453   65441 cri.go:89] found id: ""
	I0804 00:20:03.146484   65441 logs.go:276] 0 containers: []
	W0804 00:20:03.146496   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:03.146504   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:03.146569   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:03.185861   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.185884   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.185887   65441 cri.go:89] found id: ""
	I0804 00:20:03.185894   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:20:03.185941   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.190490   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.194727   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:03.194750   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:03.308015   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:20:03.308052   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:03.358699   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:20:03.358732   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:03.410398   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:20:03.410430   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.450651   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:03.450685   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:03.859092   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:03.859145   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.905500   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:03.905529   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:03.951014   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:03.951047   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:04.003275   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:04.003311   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:04.017574   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:20:04.017608   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:04.054252   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:20:04.054283   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:04.094524   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:04.094558   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:04.131163   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:04.131192   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:06.691154   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:06.691193   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.691199   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.691203   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.691209   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.691213   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.691218   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.691226   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.691232   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.691244   65441 system_pods.go:74] duration metric: took 3.852463199s to wait for pod list to return data ...
	I0804 00:20:06.691257   65441 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:06.693724   65441 default_sa.go:45] found service account: "default"
	I0804 00:20:06.693755   65441 default_sa.go:55] duration metric: took 2.486182ms for default service account to be created ...
	I0804 00:20:06.693767   65441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:06.698925   65441 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:06.698950   65441 system_pods.go:89] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.698956   65441 system_pods.go:89] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.698962   65441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.698968   65441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.698972   65441 system_pods.go:89] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.698976   65441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.698983   65441 system_pods.go:89] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.698990   65441 system_pods.go:89] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.698997   65441 system_pods.go:126] duration metric: took 5.224971ms to wait for k8s-apps to be running ...
	I0804 00:20:06.699003   65441 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:06.699047   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:06.714188   65441 system_svc.go:56] duration metric: took 15.17801ms WaitForService to wait for kubelet
	I0804 00:20:06.714213   65441 kubeadm.go:582] duration metric: took 4m21.985480612s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:06.714232   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:06.716717   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:06.716743   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:06.716757   65441 node_conditions.go:105] duration metric: took 2.521245ms to run NodePressure ...
	I0804 00:20:06.716771   65441 start.go:241] waiting for startup goroutines ...
	I0804 00:20:06.716780   65441 start.go:246] waiting for cluster config update ...
	I0804 00:20:06.716796   65441 start.go:255] writing updated cluster config ...
	I0804 00:20:06.717156   65441 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:06.765983   65441 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:06.768482   65441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-969068" cluster and "default" namespace by default
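The default-k8s-diff-port-969068 cluster finishes startup with only metrics-server still pending. The kube-system pod summary printed above can be reproduced with a short client-go listing; this is an illustrative stand-in, not minikube's system_pods code, and it assumes a kubeconfig at the default location:

    // List kube-system pods and print name, UID, and phase, similar to the
    // "8 kube-system pods found" summary in the log above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }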
	I0804 00:20:06.600233   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:08.603829   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:11.852948   65087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.247618249s)
	I0804 00:20:11.853025   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:11.870882   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:20:11.882005   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:20:11.892505   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:20:11.892527   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:20:11.892570   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:20:11.902005   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:20:11.902061   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:20:11.911585   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:20:11.921837   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:20:11.921911   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:20:11.101091   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:13.607073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:14.600605   64502 pod_ready.go:81] duration metric: took 4m0.007136508s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:20:14.600629   64502 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:20:14.600637   64502 pod_ready.go:38] duration metric: took 4m5.120472791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:14.600651   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:14.600675   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:14.600717   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:14.669699   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:14.669724   64502 cri.go:89] found id: ""
	I0804 00:20:14.669733   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:14.669789   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.674907   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:14.674978   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:14.720830   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:14.720867   64502 cri.go:89] found id: ""
	I0804 00:20:14.720877   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:14.720937   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.726667   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:14.726729   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:14.778216   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:14.778247   64502 cri.go:89] found id: ""
	I0804 00:20:14.778256   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:14.778321   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.785349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:14.785433   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:14.836381   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:14.836408   64502 cri.go:89] found id: ""
	I0804 00:20:14.836416   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:14.836475   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.841662   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:14.841752   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:14.884803   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:14.884827   64502 cri.go:89] found id: ""
	I0804 00:20:14.884836   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:14.884904   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.890625   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:14.890696   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:14.942713   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:14.942732   64502 cri.go:89] found id: ""
	I0804 00:20:14.942739   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:14.942800   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.948335   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:14.948391   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:14.994869   64502 cri.go:89] found id: ""
	I0804 00:20:14.994900   64502 logs.go:276] 0 containers: []
	W0804 00:20:14.994910   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:14.994917   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:14.994985   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:15.034528   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.034557   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.034564   64502 cri.go:89] found id: ""
	I0804 00:20:15.034574   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:15.034633   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.039335   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.044600   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:15.044625   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.091365   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:15.091398   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.144896   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:15.144924   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:15.675849   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:15.675901   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:15.691640   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:15.691699   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:11.931844   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.941369   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:20:11.941430   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.951279   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:20:11.961201   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:20:11.961275   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:20:11.972150   65087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:20:12.024567   65087 kubeadm.go:310] W0804 00:20:12.001791    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.025287   65087 kubeadm.go:310] W0804 00:20:12.002530    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.154034   65087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:20:20.358593   65087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0804 00:20:20.358649   65087 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:20:20.358721   65087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:20:20.358834   65087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:20:20.358953   65087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 00:20:20.359013   65087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:20:20.360498   65087 out.go:204]   - Generating certificates and keys ...
	I0804 00:20:20.360590   65087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:20:20.360692   65087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:20:20.360767   65087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:20:20.360821   65087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:20:20.360915   65087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:20:20.360971   65087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:20:20.361042   65087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:20:20.361124   65087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:20:20.361228   65087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:20:20.361307   65087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:20:20.361342   65087 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:20:20.361436   65087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:20:20.361523   65087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:20:20.361592   65087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:20:20.361642   65087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:20:20.361698   65087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:20:20.361746   65087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:20:20.361815   65087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:20:20.361881   65087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:20:20.363214   65087 out.go:204]   - Booting up control plane ...
	I0804 00:20:20.363312   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:20:20.363381   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:20:20.363450   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:20:20.363541   65087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:20:20.363628   65087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:20:20.363678   65087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:20:20.363790   65087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:20:20.363889   65087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 00:20:20.363960   65087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.009132208s
	I0804 00:20:20.364044   65087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:20:20.364094   65087 kubeadm.go:310] [api-check] The API server is healthy after 4.501833932s
	I0804 00:20:20.364201   65087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:20:20.364321   65087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:20:20.364397   65087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:20:20.364585   65087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-118016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:20:20.364634   65087 kubeadm.go:310] [bootstrap-token] Using token: bbnfwa.jorg7huedw5cbtk2
	I0804 00:20:20.366569   65087 out.go:204]   - Configuring RBAC rules ...
	I0804 00:20:20.366705   65087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:20:20.366823   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:20:20.366979   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:20:20.367160   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:20:20.367275   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:20:20.367352   65087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:20:20.367447   65087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:20:20.367510   65087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:20:20.367574   65087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:20:20.367580   65087 kubeadm.go:310] 
	I0804 00:20:20.367629   65087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:20:20.367635   65087 kubeadm.go:310] 
	I0804 00:20:20.367697   65087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:20:20.367703   65087 kubeadm.go:310] 
	I0804 00:20:20.367724   65087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:20:20.367784   65087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:20:20.367828   65087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:20:20.367834   65087 kubeadm.go:310] 
	I0804 00:20:20.367886   65087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:20:20.367903   65087 kubeadm.go:310] 
	I0804 00:20:20.367971   65087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:20:20.367981   65087 kubeadm.go:310] 
	I0804 00:20:20.368050   65087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:20:20.368125   65087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:20:20.368187   65087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:20:20.368193   65087 kubeadm.go:310] 
	I0804 00:20:20.368262   65087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:20:20.368353   65087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:20:20.368367   65087 kubeadm.go:310] 
	I0804 00:20:20.368480   65087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368588   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:20:20.368614   65087 kubeadm.go:310] 	--control-plane 
	I0804 00:20:20.368621   65087 kubeadm.go:310] 
	I0804 00:20:20.368705   65087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:20:20.368712   65087 kubeadm.go:310] 
	I0804 00:20:20.368810   65087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368933   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:20:20.368947   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:20:20.368957   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:20:20.370303   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:20:15.859131   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:15.859169   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:15.917686   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:15.917726   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:15.964285   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:15.964328   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:16.019646   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:16.019679   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:16.069379   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:16.069416   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:16.129752   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:16.129842   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:16.177015   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:16.177043   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:16.220526   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:16.220560   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:18.771509   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:18.793252   64502 api_server.go:72] duration metric: took 4m15.042389156s to wait for apiserver process to appear ...
	I0804 00:20:18.793291   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:18.793334   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:18.793415   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:18.839339   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:18.839363   64502 cri.go:89] found id: ""
	I0804 00:20:18.839372   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:18.839432   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.843932   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:18.844005   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:18.894398   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:18.894422   64502 cri.go:89] found id: ""
	I0804 00:20:18.894432   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:18.894491   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.899596   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:18.899664   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:18.947077   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:18.947106   64502 cri.go:89] found id: ""
	I0804 00:20:18.947114   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:18.947168   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.952349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:18.952431   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:18.999336   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:18.999361   64502 cri.go:89] found id: ""
	I0804 00:20:18.999377   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:18.999441   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.005419   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:19.005502   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:19.061388   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.061413   64502 cri.go:89] found id: ""
	I0804 00:20:19.061422   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:19.061476   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.066071   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:19.066139   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:19.111849   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.111872   64502 cri.go:89] found id: ""
	I0804 00:20:19.111879   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:19.111929   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.116272   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:19.116323   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:19.157387   64502 cri.go:89] found id: ""
	I0804 00:20:19.157414   64502 logs.go:276] 0 containers: []
	W0804 00:20:19.157423   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:19.157431   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:19.157493   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:19.199627   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.199654   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.199660   64502 cri.go:89] found id: ""
	I0804 00:20:19.199669   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:19.199727   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.204317   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.208707   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:19.208729   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:19.261684   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:19.261717   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:19.309861   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:19.309890   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:19.349376   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:19.349403   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.388561   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:19.388590   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.466119   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:19.466163   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.515539   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:19.515575   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.561529   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:19.561556   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:19.626188   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:19.626219   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:19.640348   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:19.640372   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:19.772397   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:19.772439   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:19.827217   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:19.827260   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:20.306543   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:20.306589   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:20.371388   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:20:20.384738   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:20:20.404547   65087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:20:20.404607   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:20.404659   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-118016 minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=no-preload-118016 minikube.k8s.io/primary=true
	I0804 00:20:20.602476   65087 ops.go:34] apiserver oom_adj: -16
	I0804 00:20:20.602551   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.103011   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.602888   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.102779   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.603282   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.103337   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.603522   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.103510   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.603474   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.689895   65087 kubeadm.go:1113] duration metric: took 4.285337247s to wait for elevateKubeSystemPrivileges
	I0804 00:20:24.689931   65087 kubeadm.go:394] duration metric: took 5m0.881315877s to StartCluster
	I0804 00:20:24.689947   65087 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.690018   65087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:20:24.691559   65087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.691784   65087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:20:24.691848   65087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:20:24.691963   65087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-118016"
	I0804 00:20:24.691977   65087 addons.go:69] Setting default-storageclass=true in profile "no-preload-118016"
	I0804 00:20:24.691999   65087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-118016"
	I0804 00:20:24.692001   65087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118016"
	I0804 00:20:24.692001   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:20:24.692018   65087 addons.go:69] Setting metrics-server=true in profile "no-preload-118016"
	W0804 00:20:24.692007   65087 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:20:24.692068   65087 addons.go:234] Setting addon metrics-server=true in "no-preload-118016"
	I0804 00:20:24.692086   65087 host.go:66] Checking if "no-preload-118016" exists ...
	W0804 00:20:24.692099   65087 addons.go:243] addon metrics-server should already be in state true
	I0804 00:20:24.692142   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.692440   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692464   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692494   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692441   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692517   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692566   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.693590   65087 out.go:177] * Verifying Kubernetes components...
	I0804 00:20:24.695139   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:20:24.708841   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0804 00:20:24.709324   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.709911   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.709937   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.710305   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.710594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.712827   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0804 00:20:24.712894   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0804 00:20:24.713349   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713884   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713899   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.713923   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713942   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.714211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714264   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714421   65087 addons.go:234] Setting addon default-storageclass=true in "no-preload-118016"
	W0804 00:20:24.714440   65087 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:20:24.714468   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.714605   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714623   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714801   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714846   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714993   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.715014   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.730476   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0804 00:20:24.730811   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0804 00:20:24.730912   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731145   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731470   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731486   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731733   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731748   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731808   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732034   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.732123   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732294   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.733677   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0804 00:20:24.734185   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.734257   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734306   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734689   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.734709   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.735090   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.735618   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.735643   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.736977   65087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:20:24.736979   65087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:20:22.853589   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:20:22.859439   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:20:22.860503   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:22.860521   64502 api_server.go:131] duration metric: took 4.067223392s to wait for apiserver health ...
	I0804 00:20:22.860528   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:22.860550   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:22.860598   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:22.901174   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:22.901193   64502 cri.go:89] found id: ""
	I0804 00:20:22.901200   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:22.901246   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.905319   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:22.905404   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:22.948354   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:22.948378   64502 cri.go:89] found id: ""
	I0804 00:20:22.948387   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:22.948438   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.952776   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:22.952863   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:22.989339   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:22.989376   64502 cri.go:89] found id: ""
	I0804 00:20:22.989385   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:22.989443   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.993833   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:22.993909   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:23.035367   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.035385   64502 cri.go:89] found id: ""
	I0804 00:20:23.035392   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:23.035434   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.040184   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:23.040259   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:23.078508   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.078529   64502 cri.go:89] found id: ""
	I0804 00:20:23.078538   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:23.078601   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.082907   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:23.082969   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:23.120846   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.120870   64502 cri.go:89] found id: ""
	I0804 00:20:23.120880   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:23.120943   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.125641   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:23.125702   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:23.172188   64502 cri.go:89] found id: ""
	I0804 00:20:23.172212   64502 logs.go:276] 0 containers: []
	W0804 00:20:23.172224   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:23.172232   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:23.172297   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:23.218188   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.218207   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.218211   64502 cri.go:89] found id: ""
	I0804 00:20:23.218217   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:23.218268   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.222562   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.226965   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:23.226989   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:23.269384   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:23.269414   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.309148   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:23.309178   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.356908   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:23.356936   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.395352   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:23.395381   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:23.450901   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:23.450925   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.488908   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:23.488945   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.551780   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:23.551808   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:23.975030   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:23.975070   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:24.035464   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:24.035497   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:24.053118   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:24.053148   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:24.197157   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:24.197189   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:24.254049   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:24.254083   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:24.738735   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:20:24.738757   65087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:20:24.738785   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.738836   65087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:24.738847   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:20:24.738860   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.742131   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742539   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.742569   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742690   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.742968   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743009   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.743254   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.743142   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743174   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.743387   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.743447   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743590   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743720   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.754983   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0804 00:20:24.755436   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.755877   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.755901   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.756229   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.756454   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.758285   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.758520   65087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:24.758537   65087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:20:24.758555   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.761268   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.761715   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.761739   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.762001   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.762211   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.762402   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.762593   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.942323   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:20:24.971293   65087 node_ready.go:35] waiting up to 6m0s for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991406   65087 node_ready.go:49] node "no-preload-118016" has status "Ready":"True"
	I0804 00:20:24.991428   65087 node_ready.go:38] duration metric: took 20.101499ms for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991436   65087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:25.004484   65087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:25.069407   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:20:25.069437   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:20:25.093645   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:25.178590   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:20:25.178615   65087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:20:25.246634   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:25.272880   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.272916   65087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:20:25.368517   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.442382   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442406   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.442668   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.442711   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.442717   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.442726   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442732   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.444425   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.444456   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.444463   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.451275   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.451298   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.451605   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.451620   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.451617   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218075   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218105   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218391   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218416   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.218427   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218435   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218440   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218702   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218764   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218786   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.671629   65087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.303057537s)
	I0804 00:20:26.671683   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.671702   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672010   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672031   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672041   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.672049   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672327   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672363   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672378   65087 addons.go:475] Verifying addon metrics-server=true in "no-preload-118016"
	I0804 00:20:26.674374   65087 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:20:26.803868   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:26.803909   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.803917   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.803923   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.803928   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.803934   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.803940   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.803948   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.803957   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.803966   64502 system_pods.go:74] duration metric: took 3.943432992s to wait for pod list to return data ...
	I0804 00:20:26.803978   64502 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:26.808760   64502 default_sa.go:45] found service account: "default"
	I0804 00:20:26.808786   64502 default_sa.go:55] duration metric: took 4.797226ms for default service account to be created ...
	I0804 00:20:26.808796   64502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:26.814721   64502 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:26.814753   64502 system_pods.go:89] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.814761   64502 system_pods.go:89] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.814768   64502 system_pods.go:89] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.814774   64502 system_pods.go:89] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.814780   64502 system_pods.go:89] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.814787   64502 system_pods.go:89] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.814798   64502 system_pods.go:89] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.814807   64502 system_pods.go:89] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.814819   64502 system_pods.go:126] duration metric: took 6.01558ms to wait for k8s-apps to be running ...
	I0804 00:20:26.814828   64502 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:26.814894   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:26.837462   64502 system_svc.go:56] duration metric: took 22.624089ms WaitForService to wait for kubelet
	I0804 00:20:26.837494   64502 kubeadm.go:582] duration metric: took 4m23.086636256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:26.837522   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:26.841517   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:26.841548   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:26.841563   64502 node_conditions.go:105] duration metric: took 4.034693ms to run NodePressure ...
	I0804 00:20:26.841576   64502 start.go:241] waiting for startup goroutines ...
	I0804 00:20:26.841586   64502 start.go:246] waiting for cluster config update ...
	I0804 00:20:26.841600   64502 start.go:255] writing updated cluster config ...
	I0804 00:20:26.841939   64502 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:26.908142   64502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:26.910191   64502 out.go:177] * Done! kubectl is now configured to use "embed-certs-877598" cluster and "default" namespace by default
	I0804 00:20:26.675679   65087 addons.go:510] duration metric: took 1.98382947s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:20:27.012226   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:29.511886   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:32.011000   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:32.011021   65087 pod_ready.go:81] duration metric: took 7.006508451s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:32.011031   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518235   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.518260   65087 pod_ready.go:81] duration metric: took 1.507219524s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518270   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522894   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.522916   65087 pod_ready.go:81] duration metric: took 4.639763ms for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522928   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527271   65087 pod_ready.go:92] pod "kube-proxy-4jqng" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.527291   65087 pod_ready.go:81] duration metric: took 4.353851ms for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527303   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531405   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.531424   65087 pod_ready.go:81] duration metric: took 4.113418ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531433   65087 pod_ready.go:38] duration metric: took 8.539987559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:33.531449   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:33.531505   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:33.546783   65087 api_server.go:72] duration metric: took 8.854972636s to wait for apiserver process to appear ...
	I0804 00:20:33.546813   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:33.546832   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:20:33.551131   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:20:33.552092   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:20:33.552112   65087 api_server.go:131] duration metric: took 5.292367ms to wait for apiserver health ...
	I0804 00:20:33.552119   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:33.557965   65087 system_pods.go:59] 9 kube-system pods found
	I0804 00:20:33.557987   65087 system_pods.go:61] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.557995   65087 system_pods.go:61] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.558000   65087 system_pods.go:61] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.558005   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.558009   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.558014   65087 system_pods.go:61] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.558018   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.558026   65087 system_pods.go:61] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.558035   65087 system_pods.go:61] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.558045   65087 system_pods.go:74] duration metric: took 5.921154ms to wait for pod list to return data ...
	I0804 00:20:33.558057   65087 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:33.608139   65087 default_sa.go:45] found service account: "default"
	I0804 00:20:33.608164   65087 default_sa.go:55] duration metric: took 50.097877ms for default service account to be created ...
	I0804 00:20:33.608174   65087 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:33.811878   65087 system_pods.go:86] 9 kube-system pods found
	I0804 00:20:33.811906   65087 system_pods.go:89] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.811912   65087 system_pods.go:89] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.811916   65087 system_pods.go:89] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.811920   65087 system_pods.go:89] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.811925   65087 system_pods.go:89] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.811928   65087 system_pods.go:89] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.811932   65087 system_pods.go:89] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.811939   65087 system_pods.go:89] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.811943   65087 system_pods.go:89] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.811950   65087 system_pods.go:126] duration metric: took 203.770479ms to wait for k8s-apps to be running ...
	I0804 00:20:33.811957   65087 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:33.812000   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:33.827146   65087 system_svc.go:56] duration metric: took 15.17867ms WaitForService to wait for kubelet
	I0804 00:20:33.827176   65087 kubeadm.go:582] duration metric: took 9.135367695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:33.827199   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:34.009032   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:34.009056   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:34.009076   65087 node_conditions.go:105] duration metric: took 181.872031ms to run NodePressure ...
	I0804 00:20:34.009086   65087 start.go:241] waiting for startup goroutines ...
	I0804 00:20:34.009112   65087 start.go:246] waiting for cluster config update ...
	I0804 00:20:34.009128   65087 start.go:255] writing updated cluster config ...
	I0804 00:20:34.009423   65087 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:34.059796   65087 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 00:20:34.061903   65087 out.go:177] * Done! kubectl is now configured to use "no-preload-118016" cluster and "default" namespace by default
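The run above ends with the readiness gates minikube applies before declaring the no-preload cluster started: per-pod "Ready" waits, a pgrep for the kube-apiserver process, a GET against /healthz, then the system-pods and default service-account checks. A minimal Go sketch of just the healthz gate, under stated assumptions (the endpoint address, the 2-minute budget and the skipped certificate verification are illustrative, not minikube's implementation):

// healthzgate is a minimal sketch of the apiserver health gate the log above
// records (api_server.go: "waiting for apiserver healthz status ..."). The
// endpoint, polling interval, overall budget and disabled TLS verification
// are illustrative assumptions, not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// No cluster CA is loaded in this sketch, so verification is skipped;
		// only acceptable against a throwaway local cluster.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200, as in the log: "returned 200: ok"
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, budget)
}

func main() {
	if err := waitForHealthz("https://192.168.61.137:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy; safe to move on to pod checks")
}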
	I0804 00:21:00.664979   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:21:00.665100   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:21:00.666810   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:00.666904   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:00.667020   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:00.667150   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:00.667278   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:00.667370   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:00.670254   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:00.670337   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:00.670431   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:00.670537   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:00.670623   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:00.670721   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:00.670788   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:00.670883   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:00.670990   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:00.671079   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:00.671168   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:00.671217   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:00.671265   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:00.671359   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:00.671442   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:00.671529   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:00.671611   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:00.671756   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:00.671856   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:00.671888   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:00.671940   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:00.673410   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:00.673506   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:00.673573   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:00.673627   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:00.673692   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:00.673828   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:00.673876   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:00.673972   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674207   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674283   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674517   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674590   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674752   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674851   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675053   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675173   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675451   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675463   64758 kubeadm.go:310] 
	I0804 00:21:00.675511   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:21:00.675561   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:21:00.675571   64758 kubeadm.go:310] 
	I0804 00:21:00.675614   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:21:00.675656   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:21:00.675787   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:21:00.675797   64758 kubeadm.go:310] 
	I0804 00:21:00.675928   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:21:00.675970   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:21:00.676009   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:21:00.676026   64758 kubeadm.go:310] 
	I0804 00:21:00.676172   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:21:00.676278   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:21:00.676289   64758 kubeadm.go:310] 
	I0804 00:21:00.676393   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:21:00.676466   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:21:00.676532   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:21:00.676609   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:21:00.676632   64758 kubeadm.go:310] 
	W0804 00:21:00.676723   64758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 00:21:00.676765   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:21:01.138781   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:21:01.154405   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:21:01.164426   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:21:01.164445   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:21:01.164496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:21:01.173853   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:21:01.173907   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:21:01.183634   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:21:01.193283   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:21:01.193342   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:21:01.202427   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.212186   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:21:01.212235   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.222754   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:21:01.232996   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:21:01.233059   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
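The grep-and-remove sequence just above is minikube's stale kubeconfig check before retrying kubeadm init: any file under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is deleted so the next init can regenerate it. A rough Go equivalent, with the endpoint and file list taken from the log and everything else assumed:

// staleconf mirrors the grep/rm sequence recorded above (kubeadm.go:163):
// every kubeconfig under /etc/kubernetes must already point at the expected
// control-plane endpoint, otherwise it is removed so the retried kubeadm init
// can write a fresh one. Endpoint and file list are copied from the log; the
// rest is an illustrative assumption.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the right endpoint, keep it
		}
		// Missing file or wrong endpoint: clear it (the equivalent of `rm -f`).
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Printf("could not remove %s: %v\n", f, rmErr)
			continue
		}
		fmt.Printf("%s: stale or absent, cleared\n", f)
	}
}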
	I0804 00:21:01.243778   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:21:01.319895   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:01.319975   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:01.474907   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:01.475029   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:01.475119   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:01.683624   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:01.685482   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:01.685584   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:01.685691   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:01.685792   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:01.685880   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:01.685991   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:01.686067   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:01.686147   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:01.686285   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:01.686399   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:01.686513   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:01.686600   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:01.686670   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:01.985613   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:02.088377   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:02.336621   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:02.448654   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:02.470140   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:02.471390   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:02.471456   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:02.610840   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:02.612641   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:02.612744   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:02.627044   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:02.629019   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:02.630430   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:02.633022   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:42.635581   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:42.635740   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:42.636036   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:47.636656   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:47.636879   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:57.637900   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:57.638098   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:17.638425   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:17.638634   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637807   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:57.637988   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637996   64758 kubeadm.go:310] 
	I0804 00:22:57.638035   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:22:57.638079   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:22:57.638085   64758 kubeadm.go:310] 
	I0804 00:22:57.638118   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:22:57.638148   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:22:57.638288   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:22:57.638309   64758 kubeadm.go:310] 
	I0804 00:22:57.638426   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:22:57.638507   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:22:57.638619   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:22:57.638640   64758 kubeadm.go:310] 
	I0804 00:22:57.638829   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:22:57.638944   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:22:57.638959   64758 kubeadm.go:310] 
	I0804 00:22:57.639107   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:22:57.639191   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:22:57.639300   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:22:57.639399   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:22:57.639412   64758 kubeadm.go:310] 
	I0804 00:22:57.639782   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:22:57.639904   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:22:57.640012   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:22:57.640091   64758 kubeadm.go:394] duration metric: took 8m3.172057529s to StartCluster
	I0804 00:22:57.640138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:22:57.640202   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:22:57.684020   64758 cri.go:89] found id: ""
	I0804 00:22:57.684054   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.684064   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:22:57.684072   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:22:57.684134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:22:57.722756   64758 cri.go:89] found id: ""
	I0804 00:22:57.722780   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.722788   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:22:57.722793   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:22:57.722851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:22:57.760371   64758 cri.go:89] found id: ""
	I0804 00:22:57.760400   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.760412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:22:57.760419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:22:57.760476   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:22:57.796114   64758 cri.go:89] found id: ""
	I0804 00:22:57.796144   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.796155   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:22:57.796162   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:22:57.796211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:22:57.842148   64758 cri.go:89] found id: ""
	I0804 00:22:57.842179   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.842191   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:22:57.842198   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:22:57.842286   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:22:57.914193   64758 cri.go:89] found id: ""
	I0804 00:22:57.914218   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.914229   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:22:57.914236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:22:57.914290   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:22:57.965944   64758 cri.go:89] found id: ""
	I0804 00:22:57.965973   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.965984   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:22:57.965991   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:22:57.966063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:22:58.003016   64758 cri.go:89] found id: ""
	I0804 00:22:58.003044   64758 logs.go:276] 0 containers: []
	W0804 00:22:58.003055   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:22:58.003066   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:22:58.003093   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:22:58.017277   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:22:58.017304   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:22:58.094192   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:22:58.094214   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:22:58.094227   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:22:58.210901   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:22:58.210944   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:22:58.249283   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:22:58.249317   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:22:58.300998   64758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:22:58.301054   64758 out.go:239] * 
	W0804 00:22:58.301115   64758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.301137   64758 out.go:239] * 
	W0804 00:22:58.301978   64758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:22:58.305305   64758 out.go:177] 
	W0804 00:22:58.306722   64758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.306816   64758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:22:58.306848   64758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:22:58.308372   64758 out.go:177] 
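Both the kubeadm failure text and minikube's own log gathering above fall back to crictl against the cri-o socket to see whether any control-plane container ever started; in this run every query came back empty. A small Go sketch of that inspection loop, piecing together the flags visible in the log (the sudo wrapper and the component list are assumptions):

// inspectcrio sketches the fallback that both the kubeadm error text and the
// minikube log-gathering step use: ask crictl, over the cri-o socket, whether
// any control-plane container ever started, and dump its logs if so. The flag
// combination, the sudo wrapper and the component list are assumptions pieced
// together from the commands shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func crictl(args ...string) (string, error) {
	full := append([]string{"crictl", "--runtime-endpoint", "/var/run/crio/crio.sock"}, args...)
	out, err := exec.Command("sudo", full...).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := crictl("ps", "-a", "--quiet", "--name", name)
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		ids = strings.TrimSpace(ids)
		if ids == "" {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range strings.Fields(ids) {
			logs, _ := crictl("logs", id)
			fmt.Printf("--- %s (%s) ---\n%s\n", name, id, logs)
		}
	}
}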
	
	
	==> CRI-O <==
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.194537854Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731376194509436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a440809b-1b10-4522-9deb-90ddb3ed32a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.195303346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d6ebc50-dc3a-45ac-be79-daa44d68ed06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.195376676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d6ebc50-dc3a-45ac-be79-daa44d68ed06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.195651898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d6ebc50-dc3a-45ac-be79-daa44d68ed06 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.238320075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06d38752-5f50-4671-8c8c-fc8b99b63f66 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.238417167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06d38752-5f50-4671-8c8c-fc8b99b63f66 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.240011116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=461fe690-15b2-4855-aa30-0520017653d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.240465602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731376240441272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=461fe690-15b2-4855-aa30-0520017653d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.241232519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb35ed17-94a3-42bf-92cf-012dbc37627b name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.241311243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb35ed17-94a3-42bf-92cf-012dbc37627b name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.241623488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb35ed17-94a3-42bf-92cf-012dbc37627b name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.290689598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81d18485-3aa0-498e-9a7b-ecb2d7f3b0dd name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.290864545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81d18485-3aa0-498e-9a7b-ecb2d7f3b0dd name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.292221387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=023f07fa-f27b-4474-8064-bce14f6994d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.292664300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731376292640481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=023f07fa-f27b-4474-8064-bce14f6994d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.293456527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58a96200-0858-454b-9090-029a3e31ed76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.293561163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58a96200-0858-454b-9090-029a3e31ed76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.293927927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58a96200-0858-454b-9090-029a3e31ed76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.341621309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e4c6762-47c7-41b2-9260-d2c29f346f91 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.341790788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e4c6762-47c7-41b2-9260-d2c29f346f91 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.342811527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34182ed3-c23c-4025-8da9-c7feb31e0681 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.343474490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731376343440339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34182ed3-c23c-4025-8da9-c7feb31e0681 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.344147882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b7d8d63-f64c-4878-a351-4482f39f9606 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.344234914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b7d8d63-f64c-4878-a351-4482f39f9606 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:29:36 no-preload-118016 crio[723]: time="2024-08-04 00:29:36.344507466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b7d8d63-f64c-4878-a351-4482f39f9606 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5f59b89753e4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c6f8edd0330f7       storage-provisioner
	12c65ae645171       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   09234c4c7f592       coredns-6f6b679f8f-gg97s
	28455521ad209       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   91c2813710adc       coredns-6f6b679f8f-lj494
	91f25ada05bec       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   9 minutes ago       Running             kube-proxy                0                   5a3f3a2b20c1b       kube-proxy-4jqng
	0f9d8868414e3       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   9 minutes ago       Running             kube-scheduler            2                   a888bbecc6e16       kube-scheduler-no-preload-118016
	ea380b4ed6c57       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   9 minutes ago       Running             kube-apiserver            2                   7cdbbde7b11a7       kube-apiserver-no-preload-118016
	da969ee0e5a26       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   9 minutes ago       Running             kube-controller-manager   2                   fbdb61c5a4c04       kube-controller-manager-no-preload-118016
	4bc100fbc7b93       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f879716552436       etcd-no-preload-118016
	65b8d7537c15e       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   14 minutes ago      Exited              kube-apiserver            1                   153678d85c537       kube-apiserver-no-preload-118016
	
	
	==> coredns [12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-118016
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-118016
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=no-preload-118016
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:20:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-118016
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:29:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:25:35 +0000   Sun, 04 Aug 2024 00:20:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:25:35 +0000   Sun, 04 Aug 2024 00:20:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:25:35 +0000   Sun, 04 Aug 2024 00:20:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:25:35 +0000   Sun, 04 Aug 2024 00:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.137
	  Hostname:    no-preload-118016
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 929c75e006db4c36bd710fce742d71c7
	  System UUID:                929c75e0-06db-4c36-bd71-0fce742d71c7
	  Boot ID:                    dfbd9c45-cd25-4f16-b177-f333581a83d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-gg97s                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-lj494                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-118016                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-118016             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-118016    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-4jqng                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-118016             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-9gw27              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node no-preload-118016 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node no-preload-118016 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node no-preload-118016 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-118016 event: Registered Node no-preload-118016 in Controller
	
	
	==> dmesg <==
	[  +0.043490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.918981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.533088] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556998] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 4 00:15] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.061011] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068639] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.179678] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.153517] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.605464] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +16.571741] systemd-fstab-generator[1249]: Ignoring "noauto" option for root device
	[  +0.062683] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.971582] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[  +3.314167] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.530260] kauditd_printk_skb: 53 callbacks suppressed
	[  +9.865756] kauditd_printk_skb: 30 callbacks suppressed
	[Aug 4 00:20] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.699624] systemd-fstab-generator[3021]: Ignoring "noauto" option for root device
	[  +6.068622] systemd-fstab-generator[3342]: Ignoring "noauto" option for root device
	[  +0.114342] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.289085] systemd-fstab-generator[3462]: Ignoring "noauto" option for root device
	[  +0.111627] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.674193] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678] <==
	{"level":"info","ts":"2024-08-04T00:20:14.918112Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:20:14.918336Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"cd68190d43a88764","initial-advertise-peer-urls":["https://192.168.61.137:2380"],"listen-peer-urls":["https://192.168.61.137:2380"],"advertise-client-urls":["https://192.168.61.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:20:14.918378Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:20:14.918439Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.137:2380"}
	{"level":"info","ts":"2024-08-04T00:20:14.918470Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.137:2380"}
	{"level":"info","ts":"2024-08-04T00:20:15.061841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd68190d43a88764 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-04T00:20:15.061909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd68190d43a88764 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-04T00:20:15.061944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd68190d43a88764 received MsgPreVoteResp from cd68190d43a88764 at term 1"}
	{"level":"info","ts":"2024-08-04T00:20:15.061972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd68190d43a88764 became candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:20:15.061981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd68190d43a88764 received MsgVoteResp from cd68190d43a88764 at term 2"}
	{"level":"info","ts":"2024-08-04T00:20:15.061993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cd68190d43a88764 became leader at term 2"}
	{"level":"info","ts":"2024-08-04T00:20:15.062032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cd68190d43a88764 elected leader cd68190d43a88764 at term 2"}
	{"level":"info","ts":"2024-08-04T00:20:15.066005Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:15.070106Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"cd68190d43a88764","local-member-attributes":"{Name:no-preload-118016 ClientURLs:[https://192.168.61.137:2379]}","request-path":"/0/members/cd68190d43a88764/attributes","cluster-id":"c81a097889804662","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:20:15.070313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:20:15.070833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:20:15.071012Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:15.071041Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:15.071643Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:20:15.079442Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.137:2379"}
	{"level":"info","ts":"2024-08-04T00:20:15.082407Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:20:15.086998Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:20:15.087241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c81a097889804662","local-member-id":"cd68190d43a88764","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:15.090964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:15.103001Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:29:36 up 14 min,  0 users,  load average: 0.18, 0.24, 0.18
	Linux no-preload-118016 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f] <==
	W0804 00:20:06.537155       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.552199       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.615266       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.634256       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.652094       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.666215       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.696822       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.713599       1 logging.go:55] [core] [Channel #43 SubChannel #44]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.741499       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.751410       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.753027       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.819459       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.833380       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.911984       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:07.000837       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:07.064590       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:10.441908       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:10.687048       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:10.795681       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.047547       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.069102       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.069193       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.282375       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.337518       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.386883       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0804 00:25:18.268928       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:25:18.269102       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0804 00:25:18.270373       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0804 00:25:18.270443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:26:18.271238       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:26:18.271312       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0804 00:26:18.271238       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:26:18.271421       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0804 00:26:18.272701       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0804 00:26:18.272801       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:28:18.273135       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:28:18.273452       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0804 00:28:18.273542       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:28:18.273630       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0804 00:28:18.274818       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0804 00:28:18.274900       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a] <==
	E0804 00:24:24.229008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:24:24.673530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:24:54.236158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:24:54.681477       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:25:24.242360       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:25:24.691504       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:25:35.574930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-118016"
	E0804 00:25:54.248542       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:25:54.699062       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:26:13.789932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.216349ms"
	E0804 00:26:24.255162       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:26:24.711026       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:26:27.786385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="91.263µs"
	E0804 00:26:54.261306       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:26:54.720499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:27:24.269011       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:27:24.730470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:27:54.276678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:27:54.738676       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:28:24.283922       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:28:24.747853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:28:54.290851       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:28:54.758086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:29:24.298473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:29:24.766445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0804 00:20:26.056858       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0804 00:20:26.180473       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.137"]
	E0804 00:20:26.184657       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0804 00:20:26.695417       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0804 00:20:26.695481       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:20:26.695511       1 server_linux.go:169] "Using iptables Proxier"
	I0804 00:20:26.700424       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0804 00:20:26.700647       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0804 00:20:26.700674       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:20:26.704112       1 config.go:197] "Starting service config controller"
	I0804 00:20:26.704155       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:20:26.704175       1 config.go:104] "Starting endpoint slice config controller"
	I0804 00:20:26.704178       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:20:26.705323       1 config.go:326] "Starting node config controller"
	I0804 00:20:26.705348       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:20:26.805014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:20:26.805102       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:20:26.806026       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e] <==
	W0804 00:20:17.263856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:20:17.263931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.090208       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 00:20:18.090265       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0804 00:20:18.099179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 00:20:18.099269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.099180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 00:20:18.099372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.200583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:18.200631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.217759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:18.217849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.217877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 00:20:18.218098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.220133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:18.220200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.462573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 00:20:18.462707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.507200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 00:20:18.507633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.537044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 00:20:18.537925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.585560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:20:18.585689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0804 00:20:21.034793       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:28:29 no-preload-118016 kubelet[3348]: E0804 00:28:29.908962    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731309908559331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:28:29 no-preload-118016 kubelet[3348]: E0804 00:28:29.909222    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731309908559331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:28:32 no-preload-118016 kubelet[3348]: E0804 00:28:32.770109    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:28:39 no-preload-118016 kubelet[3348]: E0804 00:28:39.910990    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731319910598795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:28:39 no-preload-118016 kubelet[3348]: E0804 00:28:39.911371    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731319910598795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:28:47 no-preload-118016 kubelet[3348]: E0804 00:28:47.772893    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:28:49 no-preload-118016 kubelet[3348]: E0804 00:28:49.913992    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731329913558133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:28:49 no-preload-118016 kubelet[3348]: E0804 00:28:49.914020    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731329913558133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:28:59 no-preload-118016 kubelet[3348]: E0804 00:28:59.916393    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731339915938364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:28:59 no-preload-118016 kubelet[3348]: E0804 00:28:59.916426    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731339915938364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:29:00 no-preload-118016 kubelet[3348]: E0804 00:29:00.770332    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:29:09 no-preload-118016 kubelet[3348]: E0804 00:29:09.919687    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731349919226558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:29:09 no-preload-118016 kubelet[3348]: E0804 00:29:09.920108    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731349919226558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:29:11 no-preload-118016 kubelet[3348]: E0804 00:29:11.771361    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:29:19 no-preload-118016 kubelet[3348]: E0804 00:29:19.827424    3348 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:29:19 no-preload-118016 kubelet[3348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:29:19 no-preload-118016 kubelet[3348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:29:19 no-preload-118016 kubelet[3348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:29:19 no-preload-118016 kubelet[3348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:29:19 no-preload-118016 kubelet[3348]: E0804 00:29:19.922396    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731359921963751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:29:19 no-preload-118016 kubelet[3348]: E0804 00:29:19.922438    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731359921963751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:29:23 no-preload-118016 kubelet[3348]: E0804 00:29:23.770948    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:29:29 no-preload-118016 kubelet[3348]: E0804 00:29:29.924938    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731369924393943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:29:29 no-preload-118016 kubelet[3348]: E0804 00:29:29.925447    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731369924393943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:29:35 no-preload-118016 kubelet[3348]: E0804 00:29:35.771998    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	
	
	==> storage-provisioner [5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6] <==
	I0804 00:20:27.010470       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:20:27.042522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:20:27.042614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:20:27.056619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:20:27.063023       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-118016_ba3a8652-7281-4978-b562-91d934499239!
	I0804 00:20:27.057226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e720b089-28e8-4857-ac6f-14ff33c60ece", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-118016_ba3a8652-7281-4978-b562-91d934499239 became leader
	I0804 00:20:27.164254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-118016_ba3a8652-7281-4978-b562-91d934499239!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118016 -n no-preload-118016
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-118016 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9gw27
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-118016 describe pod metrics-server-6867b74b74-9gw27
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-118016 describe pod metrics-server-6867b74b74-9gw27: exit status 1 (64.230167ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9gw27" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-118016 describe pod metrics-server-6867b74b74-9gw27: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.51s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
E0804 00:23:27.615956   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
[previous warning repeated 138 more times while the API server at 192.168.72.154:8443 refused connections]
E0804 00:25:58.007650   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
[previous warning repeated 32 more times while the API server at 192.168.72.154:8443 refused connections]
E0804 00:26:30.667542   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: ... (last warning repeated 116 more times) ...
E0804 00:28:27.616714   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
E0804 00:30:58.007879   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (223.327288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-576210" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
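The harness polls for a Ready pod matching k8s-app=kubernetes-dashboard while also checking that the apiserver answers on 192.168.72.154:8443. A manual spot-check of the same two conditions against this profile would look roughly like the commands below (an illustrative sketch only, assuming the kubeconfig context carries the profile name as minikube normally sets it; these commands are not part of the recorded run):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210
	kubectl --context old-k8s-version-576210 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-576210 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s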
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (217.900514ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-576210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-576210 logs -n 25: (1.778933487s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302198                           | kubernetes-upgrade-302198    | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-551054 sudo                            | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877598            | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-705918                              | cert-expiration-705918       | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-423330 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | disable-driver-mounts-423330                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:09 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-118016             | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC | 04 Aug 24 00:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-576210        | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:11:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:11:52.361065   65441 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:11:52.361334   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361345   65441 out.go:304] Setting ErrFile to fd 2...
	I0804 00:11:52.361349   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361548   65441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:11:52.362087   65441 out.go:298] Setting JSON to false
	I0804 00:11:52.363002   65441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6856,"bootTime":1722723456,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:11:52.363061   65441 start.go:139] virtualization: kvm guest
	I0804 00:11:52.365345   65441 out.go:177] * [default-k8s-diff-port-969068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:11:52.367170   65441 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:11:52.367161   65441 notify.go:220] Checking for updates...
	I0804 00:11:52.369837   65441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:11:52.371134   65441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:11:52.372226   65441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:11:52.373445   65441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:11:52.374802   65441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:11:52.376375   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:11:52.376787   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.376859   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.392495   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0804 00:11:52.392954   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.393477   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.393497   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.393883   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.394048   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.394313   65441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:11:52.394606   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.394638   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.409194   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0804 00:11:52.409594   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.410032   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.410050   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.410358   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.410529   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.445480   65441 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:11:52.446679   65441 start.go:297] selected driver: kvm2
	I0804 00:11:52.446694   65441 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.446827   65441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:11:52.447792   65441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.447886   65441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:11:52.462893   65441 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:11:52.463275   65441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:11:52.463306   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:11:52.463316   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:11:52.463368   65441 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.463486   65441 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.465374   65441 out.go:177] * Starting "default-k8s-diff-port-969068" primary control-plane node in "default-k8s-diff-port-969068" cluster
	I0804 00:11:52.466656   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:11:52.466698   65441 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:11:52.466710   65441 cache.go:56] Caching tarball of preloaded images
	I0804 00:11:52.466790   65441 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:11:52.466801   65441 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:11:52.466901   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:11:52.467100   65441 start.go:360] acquireMachinesLock for default-k8s-diff-port-969068: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:11:55.809602   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:11:58.881666   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:04.961665   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:08.033617   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:14.113634   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:17.185623   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:23.265618   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:26.337594   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:32.417583   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:35.489705   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:41.569654   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:44.641653   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:50.721640   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:53.793649   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:59.873643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:02.945676   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:09.025652   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:12.097647   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:18.177740   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:21.249606   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:27.329637   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:30.401648   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:36.481588   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:39.553638   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:45.633633   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:48.705646   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:54.785636   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:57.857662   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:03.937643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:07.009557   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:13.089694   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:16.161619   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:22.241650   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:25.313612   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:28.318586   64758 start.go:364] duration metric: took 4m16.324186239s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:14:28.318635   64758 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:28.318646   64758 fix.go:54] fixHost starting: 
	I0804 00:14:28.319092   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:28.319128   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:28.334850   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0804 00:14:28.335321   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:28.335817   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:14:28.335848   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:28.336204   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:28.336435   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:28.336622   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:14:28.338146   64758 fix.go:112] recreateIfNeeded on old-k8s-version-576210: state=Stopped err=<nil>
	I0804 00:14:28.338166   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	W0804 00:14:28.338322   64758 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:28.340640   64758 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	I0804 00:14:28.315605   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:28.315642   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316035   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:14:28.316073   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316325   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:14:28.318440   64502 machine.go:97] duration metric: took 4m37.42620041s to provisionDockerMachine
	I0804 00:14:28.318477   64502 fix.go:56] duration metric: took 4m37.448052873s for fixHost
	I0804 00:14:28.318485   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 4m37.44807127s
	W0804 00:14:28.318509   64502 start.go:714] error starting host: provision: host is not running
	W0804 00:14:28.318594   64502 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0804 00:14:28.318606   64502 start.go:729] Will try again in 5 seconds ...
	I0804 00:14:28.342217   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .Start
	I0804 00:14:28.342401   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:14:28.343274   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:14:28.343761   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:14:28.344268   64758 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:14:28.345080   64758 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:14:29.575420   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:14:29.576307   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.576754   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.576842   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.576711   66003 retry.go:31] will retry after 272.821874ms: waiting for machine to come up
	I0804 00:14:29.851363   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.851951   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.851976   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.851895   66003 retry.go:31] will retry after 247.116514ms: waiting for machine to come up
	I0804 00:14:30.100479   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.100883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.100916   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.100833   66003 retry.go:31] will retry after 353.251065ms: waiting for machine to come up
	I0804 00:14:30.455526   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.455975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.456004   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.455933   66003 retry.go:31] will retry after 558.071575ms: waiting for machine to come up
	I0804 00:14:31.015539   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.015974   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.016000   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.015917   66003 retry.go:31] will retry after 514.757536ms: waiting for machine to come up
	I0804 00:14:31.532799   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.533232   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.533250   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.533186   66003 retry.go:31] will retry after 607.548546ms: waiting for machine to come up
	I0804 00:14:33.318807   64502 start.go:360] acquireMachinesLock for embed-certs-877598: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:14:32.142162   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:32.142658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:32.142693   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:32.142610   66003 retry.go:31] will retry after 897.977595ms: waiting for machine to come up
	I0804 00:14:33.042628   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:33.043002   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:33.043028   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:33.042966   66003 retry.go:31] will retry after 1.094117762s: waiting for machine to come up
	I0804 00:14:34.138946   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:34.139459   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:34.139485   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:34.139414   66003 retry.go:31] will retry after 1.435055372s: waiting for machine to come up
	I0804 00:14:35.576253   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:35.576603   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:35.576625   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:35.576547   66003 retry.go:31] will retry after 1.688006591s: waiting for machine to come up
	I0804 00:14:37.265928   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:37.266429   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:37.266456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:37.266371   66003 retry.go:31] will retry after 2.356818801s: waiting for machine to come up
	I0804 00:14:39.624408   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:39.624832   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:39.624863   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:39.624775   66003 retry.go:31] will retry after 2.41856098s: waiting for machine to come up
	I0804 00:14:46.442402   65087 start.go:364] duration metric: took 3m44.405576801s to acquireMachinesLock for "no-preload-118016"
	I0804 00:14:46.442459   65087 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:46.442469   65087 fix.go:54] fixHost starting: 
	I0804 00:14:46.442938   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:46.442975   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:46.459944   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0804 00:14:46.460375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:46.460851   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:14:46.460871   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:46.461211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:46.461402   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:14:46.461538   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:14:46.463097   65087 fix.go:112] recreateIfNeeded on no-preload-118016: state=Stopped err=<nil>
	I0804 00:14:46.463126   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	W0804 00:14:46.463282   65087 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:46.465711   65087 out.go:177] * Restarting existing kvm2 VM for "no-preload-118016" ...
	I0804 00:14:42.044498   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:42.044855   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:42.044882   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:42.044822   66003 retry.go:31] will retry after 3.111190148s: waiting for machine to come up
	I0804 00:14:45.158161   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.158688   64758 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:14:45.158709   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:14:45.158719   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.159112   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.159138   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | skip adding static IP to network mk-old-k8s-version-576210 - found existing host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"}
	I0804 00:14:45.159151   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:14:45.159163   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:14:45.159172   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:14:45.161469   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161782   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.161812   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161936   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:14:45.161975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:14:45.162015   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:14:45.162034   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:14:45.162044   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:14:45.281546   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
	I0804 00:14:45.281859   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:14:45.282574   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.284998   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285386   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.285414   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285614   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:14:45.285806   64758 machine.go:94] provisionDockerMachine start ...
	I0804 00:14:45.285823   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:45.286098   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.288285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288640   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.288668   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288753   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.288931   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289088   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289253   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.289426   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.289628   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.289640   64758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:14:45.386001   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:14:45.386036   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386325   64758 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:14:45.386348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386536   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.389316   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389718   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.389739   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389948   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.390122   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390285   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390415   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.390557   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.390758   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.390776   64758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:14:45.499644   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:14:45.499695   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.502583   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.502935   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.502959   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.503123   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.503318   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503456   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503570   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.503729   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.503898   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.503915   64758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:14:45.606971   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
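(Editor's note) The shell run over SSH above makes the hostname resolvable idempotently: if no /etc/hosts line already ends in the machine name, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. A rough local Go equivalent of that logic is sketched below; it takes the hosts file path as an argument so it can be tried against a copy, and it is not the code path minikube uses.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the grep/sed/tee sequence from the log:
// do nothing if the name is already mapped, rewrite an existing
// 127.0.1.1 line if present, otherwise append one.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`\s*$`).MatchString(text) {
		return nil // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+name)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	if err := ensureHostname(os.Args[1], "old-k8s-version-576210"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}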
	I0804 00:14:45.607003   64758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:14:45.607045   64758 buildroot.go:174] setting up certificates
	I0804 00:14:45.607053   64758 provision.go:84] configureAuth start
	I0804 00:14:45.607062   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.607327   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.610009   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610378   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.610407   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610545   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.612549   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.612876   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.612908   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.613071   64758 provision.go:143] copyHostCerts
	I0804 00:14:45.613134   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:14:45.613147   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:14:45.613231   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:14:45.613343   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:14:45.613368   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:14:45.613410   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:14:45.613491   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:14:45.613501   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:14:45.613535   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:14:45.613609   64758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
	I0804 00:14:45.794221   64758 provision.go:177] copyRemoteCerts
	I0804 00:14:45.794276   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:14:45.794299   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.796859   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797182   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.797225   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.797555   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.797687   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.797804   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:45.875704   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:14:45.903765   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:14:45.930101   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:14:45.955639   64758 provision.go:87] duration metric: took 348.556108ms to configureAuth
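(Editor's note) configureAuth above refreshes the host-side CA material and generates a server certificate whose SANs are the IPs and names printed in the log (127.0.0.1, 192.168.72.154, localhost, minikube, old-k8s-version-576210), then copies the PEM files into /etc/docker on the guest. The sketch below shows roughly what issuing such a certificate involves with the Go standard library; it creates a throwaway CA so it is self-contained, and it is not the implementation used here. Error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real run reuses the existing minikube CA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-576210"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.154")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-576210"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}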
	I0804 00:14:45.955668   64758 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:14:45.955874   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:14:45.955960   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.958487   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958835   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.958950   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958970   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.959193   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.959616   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.959789   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.959810   64758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:14:46.217683   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:14:46.217725   64758 machine.go:97] duration metric: took 931.901933ms to provisionDockerMachine
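(Editor's note) The container-runtime option above is applied by writing a sysconfig drop-in and restarting cri-o (the printf | tee | systemctl command a few lines up). A local, non-SSH Go sketch of that step follows; the file content is copied from the log, and it assumes root privileges and an installed crio service.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const content = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		log.Fatal(err)
	}
	// Pick up the new options by restarting the runtime, as in the log.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}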
	I0804 00:14:46.217742   64758 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:14:46.217758   64758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:14:46.217787   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.218127   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:14:46.218151   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.220834   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221148   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.221170   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221342   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.221576   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.221733   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.221867   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.300102   64758 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:14:46.304434   64758 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:14:46.304464   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:14:46.304538   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:14:46.304631   64758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:14:46.304747   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:14:46.314378   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:46.339057   64758 start.go:296] duration metric: took 121.299069ms for postStartSetup
	I0804 00:14:46.339105   64758 fix.go:56] duration metric: took 18.020458894s for fixHost
	I0804 00:14:46.339129   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.341883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342258   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.342285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.342688   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342856   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342992   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.343161   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:46.343385   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:46.343400   64758 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:14:46.442247   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730486.414818212
	
	I0804 00:14:46.442275   64758 fix.go:216] guest clock: 1722730486.414818212
	I0804 00:14:46.442288   64758 fix.go:229] Guest: 2024-08-04 00:14:46.414818212 +0000 UTC Remote: 2024-08-04 00:14:46.339109981 +0000 UTC m=+274.490542023 (delta=75.708231ms)
	I0804 00:14:46.442313   64758 fix.go:200] guest clock delta is within tolerance: 75.708231ms
	I0804 00:14:46.442319   64758 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 18.123699316s
	I0804 00:14:46.442347   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.442656   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:46.445456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.445865   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.445892   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.446069   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446577   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446743   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446816   64758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:14:46.446850   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.446965   64758 ssh_runner.go:195] Run: cat /version.json
	I0804 00:14:46.446987   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.449576   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449794   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449953   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.449983   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450178   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450265   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.450317   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450384   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450520   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450605   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450667   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450733   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.450780   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450910   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.534686   64758 ssh_runner.go:195] Run: systemctl --version
	I0804 00:14:46.554270   64758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:14:46.708220   64758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:14:46.714541   64758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:14:46.714607   64758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:14:46.731642   64758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:14:46.731668   64758 start.go:495] detecting cgroup driver to use...
	I0804 00:14:46.731739   64758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:14:46.748782   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:14:46.763556   64758 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:14:46.763640   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:14:46.778075   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:14:46.793133   64758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:14:46.466927   65087 main.go:141] libmachine: (no-preload-118016) Calling .Start
	I0804 00:14:46.467081   65087 main.go:141] libmachine: (no-preload-118016) Ensuring networks are active...
	I0804 00:14:46.467696   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network default is active
	I0804 00:14:46.468023   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network mk-no-preload-118016 is active
	I0804 00:14:46.468344   65087 main.go:141] libmachine: (no-preload-118016) Getting domain xml...
	I0804 00:14:46.468932   65087 main.go:141] libmachine: (no-preload-118016) Creating domain...
	I0804 00:14:46.918377   64758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:14:47.059683   64758 docker.go:233] disabling docker service ...
	I0804 00:14:47.059753   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:14:47.074819   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:14:47.092184   64758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:14:47.235274   64758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:14:47.357937   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:14:47.375273   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:14:47.395182   64758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:14:47.395236   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.407036   64758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:14:47.407092   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.418562   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.434481   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
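(Editor's note) The sed invocations above adjust /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.2, switch cgroup_manager to cgroupfs, drop any existing conmon_cgroup line, and re-add conmon_cgroup = "pod" after the cgroup manager. Below is a sketch of the same rewrite done with Go regexps instead of sed; path and keys are taken from the log lines, and this is illustrative only.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	text := string(data)
	// Pin the pause image used for pod sandboxes.
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Remove any stale conmon_cgroup setting, then re-add it next to cgroup_manager.
	text = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(text, "")
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(text), 0644); err != nil {
		log.Fatal(err)
	}
}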
	I0804 00:14:47.447488   64758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:14:47.460242   64758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:14:47.471089   64758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:14:47.471143   64758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:14:47.486698   64758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:14:47.498754   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:47.630867   64758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:14:47.796598   64758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:14:47.796690   64758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:14:47.802302   64758 start.go:563] Will wait 60s for crictl version
	I0804 00:14:47.802364   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:47.806368   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:14:47.847588   64758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:14:47.847679   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.877936   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.908229   64758 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:14:47.909635   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:47.912658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913102   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:47.913130   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913438   64758 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:14:47.917910   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:47.931201   64758 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:14:47.931318   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:14:47.931381   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:47.980001   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:47.980071   64758 ssh_runner.go:195] Run: which lz4
	I0804 00:14:47.984277   64758 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:14:47.988781   64758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:14:47.988810   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:14:49.706968   64758 crio.go:462] duration metric: took 1.722721175s to copy over tarball
	I0804 00:14:49.707059   64758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:14:47.715321   65087 main.go:141] libmachine: (no-preload-118016) Waiting to get IP...
	I0804 00:14:47.716397   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.716853   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.716889   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.716820   66120 retry.go:31] will retry after 187.841432ms: waiting for machine to come up
	I0804 00:14:47.906481   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.906984   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.907018   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.906942   66120 retry.go:31] will retry after 389.569097ms: waiting for machine to come up
	I0804 00:14:48.298691   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.299997   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.300021   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.299947   66120 retry.go:31] will retry after 382.905254ms: waiting for machine to come up
	I0804 00:14:48.684628   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.685095   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.685127   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.685066   66120 retry.go:31] will retry after 526.267085ms: waiting for machine to come up
	I0804 00:14:49.213459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.214180   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.214203   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.214142   66120 retry.go:31] will retry after 666.253139ms: waiting for machine to come up
	I0804 00:14:49.882141   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.882610   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.882639   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.882560   66120 retry.go:31] will retry after 776.560525ms: waiting for machine to come up
	I0804 00:14:50.660679   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:50.661149   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:50.661177   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:50.661105   66120 retry.go:31] will retry after 825.927722ms: waiting for machine to come up
	I0804 00:14:51.488562   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:51.488937   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:51.488964   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:51.488894   66120 retry.go:31] will retry after 1.210535859s: waiting for machine to come up
	I0804 00:14:52.511242   64758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.804147671s)
	I0804 00:14:52.511275   64758 crio.go:469] duration metric: took 2.804279705s to extract the tarball
	I0804 00:14:52.511285   64758 ssh_runner.go:146] rm: /preloaded.tar.lz4
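(Editor's note) The preload step above copies the ~473 MB preloaded-images tarball to the guest and unpacks it into /var with extended attributes preserved, which is why the copy and extraction each take a few seconds. A thin Go wrapper around the same tar invocation is sketched below; it assumes tar and lz4 are available on the host and is illustrative only.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the log: keep security.capability xattrs, decompress with lz4,
	// and extract under /var where cri-o keeps its image store.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
	log.Printf("extracted preloaded tarball in %s", time.Since(start))
}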
	I0804 00:14:52.553905   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:52.587405   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:52.587429   64758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:14:52.587496   64758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.587513   64758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.587550   64758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.587551   64758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.587554   64758 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.587567   64758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.587570   64758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.587577   64758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.589240   64758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.589239   64758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.589247   64758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.589211   64758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.589287   64758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589579   64758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.742969   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.766505   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.782813   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:14:52.788509   64758 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:14:52.788553   64758 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.788598   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.823108   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.829531   64758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:14:52.829577   64758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.829648   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.858209   64758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:14:52.858238   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.858245   64758 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:14:52.858288   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.888665   64758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:14:52.888717   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.888748   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:14:52.888717   64758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.888794   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.918127   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.921386   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:14:52.929839   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.977866   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:14:52.977919   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.977960   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:14:52.994379   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.003198   64758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:14:53.003233   64758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.003273   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.056310   64758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:14:53.056338   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:14:53.056357   64758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.056403   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.062077   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.062119   64758 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:14:53.062161   64758 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.062206   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.064260   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.114709   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:14:53.114758   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.118375   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:14:53.147635   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:14:53.497155   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:53.647242   64758 cache_images.go:92] duration metric: took 1.059794593s to LoadCachedImages
	W0804 00:14:53.647353   64758 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
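(Editor's note) LoadCachedImages above decides, per image, whether a transfer is needed by comparing the runtime's image ID (via podman image inspect) against the expected hash, removing stale images with crictl before loading from the local cache; in this run the cache files are missing, so the step ends with the warning shown. Below is a hedged sketch of that check for a single image, using the hash printed in the log; it is not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime lacks the image or holds a
// different ID than the one we expect from the cache.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	image := "registry.k8s.io/coredns:1.7.0"
	want := "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	if needsTransfer(image, want) {
		if out, err := exec.Command("sudo", "/usr/bin/crictl", "rmi", image).CombinedOutput(); err != nil {
			fmt.Printf("rmi %s: %v: %s\n", image, err, out)
		}
		fmt.Println("would now load", image, "from the local image cache")
	}
}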
	I0804 00:14:53.647370   64758 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:14:53.647507   64758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:14:53.647586   64758 ssh_runner.go:195] Run: crio config
	I0804 00:14:53.710377   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:14:53.710399   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:14:53.710411   64758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:14:53.710437   64758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:14:53.710583   64758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:14:53.710661   64758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:14:53.721942   64758 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:14:53.722005   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:14:53.732623   64758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:14:53.749878   64758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:14:53.767147   64758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
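(Editor's note) The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new on the guest. For context, a config file like this is normally consumed by "kubeadm init --config <file>"; the snippet below is only an illustration of that usage, not the invocation this run performs (the actual bootstrap happens later in the log), and the kubeadm binary path is an assumption inferred from the kubelet path shown earlier.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Hypothetical usage of the rendered config; the binary path is assumed,
	// not taken from this section of the log.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubeadm",
		"init", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm init: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}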
	I0804 00:14:53.785522   64758 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:14:53.789438   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:53.802152   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:53.934508   64758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:14:53.952247   64758 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:14:53.952280   64758 certs.go:194] generating shared ca certs ...
	I0804 00:14:53.952301   64758 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:53.952470   64758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:14:53.952523   64758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:14:53.952536   64758 certs.go:256] generating profile certs ...
	I0804 00:14:53.952658   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:14:53.952730   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:14:53.952783   64758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:14:53.952948   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:14:53.953000   64758 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:14:53.953013   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:14:53.953048   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:14:53.953084   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:14:53.953114   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:14:53.953191   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:53.954013   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:14:54.001446   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:14:54.029628   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:14:54.062713   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:14:54.090711   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:14:54.117970   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:14:54.163691   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:14:54.190151   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:14:54.219334   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:14:54.244677   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:14:54.269795   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:14:54.294949   64758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:14:54.312330   64758 ssh_runner.go:195] Run: openssl version
	I0804 00:14:54.318320   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:14:54.328932   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333686   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333737   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.341330   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:14:54.356008   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:14:54.368966   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373896   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373954   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.379770   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:14:54.390903   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:14:54.402637   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407296   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407362   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.413215   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:14:54.424473   64758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:14:54.429673   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:14:54.436038   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:14:54.442091   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:14:54.448507   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:14:54.455421   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:14:54.461969   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
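Each "-checkend 86400" probe above asks openssl whether the certificate expires within the next 24 hours; only certificates that pass are reused rather than regenerated. A minimal Go equivalent using crypto/x509 (a sketch, not minikube's code; the path is taken from the first probe):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// openssl x509 -checkend 86400 fails if the cert expires within 86400s.
	if time.Until(cert.NotAfter) < 86400*time.Second {
		fmt.Println("certificate expires within 24h, would be regenerated")
	} else {
		fmt.Println("certificate valid for more than 24h, reused")
	}
}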
	I0804 00:14:54.468042   64758 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:14:54.468151   64758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:14:54.468208   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.508109   64758 cri.go:89] found id: ""
	I0804 00:14:54.508183   64758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:14:54.518712   64758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:14:54.518736   64758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:14:54.518788   64758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:14:54.528545   64758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:14:54.529780   64758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:14:54.530411   64758 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-576210" cluster setting kubeconfig missing "old-k8s-version-576210" context setting]
	I0804 00:14:54.531316   64758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:54.550431   64758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:14:54.561047   64758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.154
	I0804 00:14:54.561086   64758 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:14:54.561108   64758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:14:54.561163   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.597213   64758 cri.go:89] found id: ""
	I0804 00:14:54.597282   64758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:14:54.612914   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:14:54.622533   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:14:54.622562   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:14:54.622613   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:14:54.632746   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:14:54.632812   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:14:54.642197   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:14:54.651204   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:14:54.651268   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:14:54.660496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.669448   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:14:54.669512   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.678773   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:14:54.687854   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:14:54.687902   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
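The sequence above probes each kubeconfig-style file for the expected control-plane endpoint and removes the ones that are missing or point elsewhere, so the kubeadm init phases that follow can regenerate them. A compact sketch of that cleanup logic (illustrative only; the file list and endpoint are copied from the log lines above):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: delete it so
			// "kubeadm init phase kubeconfig" can write a fresh copy.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}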
	I0804 00:14:54.697066   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:14:54.707036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:54.840553   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.551919   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.790500   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.898210   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.995621   64758 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:14:55.995711   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.496072   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
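The repeated pgrep runs above and below are a fixed-interval poll: minikube re-checks for a kube-apiserver process roughly every 500ms until one appears or the wait times out. A standalone sketch of the same pattern (the 4-minute deadline is an assumption; the log does not state the timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe the log runs over SSH about twice per second.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}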
	I0804 00:14:52.701200   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:52.701574   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:52.701598   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:52.701547   66120 retry.go:31] will retry after 1.518623613s: waiting for machine to come up
	I0804 00:14:54.221367   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:54.221886   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:54.221916   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:54.221835   66120 retry.go:31] will retry after 1.869121058s: waiting for machine to come up
	I0804 00:14:56.092101   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:56.092527   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:56.092550   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:56.092488   66120 retry.go:31] will retry after 2.071227436s: waiting for machine to come up
	I0804 00:14:56.995965   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.496285   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.995805   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.496549   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.996224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.496360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.996056   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.496435   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.166383   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:58.166760   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:58.166807   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:58.166729   66120 retry.go:31] will retry after 2.352991709s: waiting for machine to come up
	I0804 00:15:00.522153   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:00.522630   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:15:00.522657   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:15:00.522584   66120 retry.go:31] will retry after 3.326179831s: waiting for machine to come up
	I0804 00:15:05.170439   65441 start.go:364] duration metric: took 3m12.703297591s to acquireMachinesLock for "default-k8s-diff-port-969068"
	I0804 00:15:05.170512   65441 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:05.170520   65441 fix.go:54] fixHost starting: 
	I0804 00:15:05.170935   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:05.170974   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:05.188546   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0804 00:15:05.188997   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:05.189494   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:05.189518   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:05.189933   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:05.190132   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:05.190276   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:05.191653   65441 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969068: state=Stopped err=<nil>
	I0804 00:15:05.191684   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	W0804 00:15:05.191834   65441 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:05.194275   65441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-969068" ...
	I0804 00:15:01.996148   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.496756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.996430   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.496646   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.996707   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.496772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.995997   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.496651   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.996384   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.496403   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.850063   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850518   65087 main.go:141] libmachine: (no-preload-118016) Found IP for machine: 192.168.61.137
	I0804 00:15:03.850544   65087 main.go:141] libmachine: (no-preload-118016) Reserving static IP address...
	I0804 00:15:03.850559   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has current primary IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850970   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.851001   65087 main.go:141] libmachine: (no-preload-118016) DBG | skip adding static IP to network mk-no-preload-118016 - found existing host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"}
	I0804 00:15:03.851015   65087 main.go:141] libmachine: (no-preload-118016) Reserved static IP address: 192.168.61.137
	I0804 00:15:03.851030   65087 main.go:141] libmachine: (no-preload-118016) Waiting for SSH to be available...
	I0804 00:15:03.851048   65087 main.go:141] libmachine: (no-preload-118016) DBG | Getting to WaitForSSH function...
	I0804 00:15:03.853316   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853676   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.853705   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853819   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH client type: external
	I0804 00:15:03.853850   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa (-rw-------)
	I0804 00:15:03.853886   65087 main.go:141] libmachine: (no-preload-118016) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:03.853901   65087 main.go:141] libmachine: (no-preload-118016) DBG | About to run SSH command:
	I0804 00:15:03.853913   65087 main.go:141] libmachine: (no-preload-118016) DBG | exit 0
	I0804 00:15:03.981414   65087 main.go:141] libmachine: (no-preload-118016) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:03.981807   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetConfigRaw
	I0804 00:15:03.982419   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:03.985062   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985400   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.985433   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985674   65087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/config.json ...
	I0804 00:15:03.985857   65087 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:03.985873   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:03.986090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:03.988490   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.988798   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.988826   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.989017   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:03.989183   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989342   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989510   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:03.989697   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:03.989916   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:03.989927   65087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:04.106042   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:04.106090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106372   65087 buildroot.go:166] provisioning hostname "no-preload-118016"
	I0804 00:15:04.106398   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.109434   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.109803   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109919   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.110092   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110248   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110423   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.110582   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.110749   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.110764   65087 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-118016 && echo "no-preload-118016" | sudo tee /etc/hostname
	I0804 00:15:04.239856   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-118016
	
	I0804 00:15:04.239884   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.242877   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243241   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.243271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243486   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.243712   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.243897   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.244046   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.244232   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.244420   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.244443   65087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118016/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:04.367259   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:04.367289   65087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:04.367330   65087 buildroot.go:174] setting up certificates
	I0804 00:15:04.367340   65087 provision.go:84] configureAuth start
	I0804 00:15:04.367432   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.367848   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:04.370330   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370630   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.370658   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370744   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.372799   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373175   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.373203   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373308   65087 provision.go:143] copyHostCerts
	I0804 00:15:04.373386   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:04.373399   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:04.373458   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:04.373557   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:04.373565   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:04.373585   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:04.373651   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:04.373657   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:04.373675   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:04.373732   65087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.no-preload-118016 san=[127.0.0.1 192.168.61.137 localhost minikube no-preload-118016]
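The server certificate above is generated with SANs for 127.0.0.1, 192.168.61.137, localhost, minikube and no-preload-118016, and the cluster config's CertExpiration of 26280h. A rough crypto/x509 sketch with the same SANs and lifetime (self-signed here for brevity; minikube actually signs with ca.pem/ca-key.pem, and the org name is reused from the log):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-118016"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-118016"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.137")},
	}

	// Self-signed for the sketch; minikube signs this template with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}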
	I0804 00:15:04.467261   65087 provision.go:177] copyRemoteCerts
	I0804 00:15:04.467322   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:04.467347   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.469843   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470126   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.470154   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470297   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.470478   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.470644   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.470761   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:04.559980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:04.585701   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:04.610270   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:04.633954   65087 provision.go:87] duration metric: took 266.53536ms to configureAuth
	I0804 00:15:04.633981   65087 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:04.634154   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:15:04.634219   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.636880   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637243   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.637271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637452   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.637664   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637823   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637921   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.638060   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.638234   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.638250   65087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:04.916045   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:04.916077   65087 machine.go:97] duration metric: took 930.20802ms to provisionDockerMachine
	I0804 00:15:04.916088   65087 start.go:293] postStartSetup for "no-preload-118016" (driver="kvm2")
	I0804 00:15:04.916100   65087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:04.916113   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:04.916429   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:04.916453   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.919155   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919485   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.919514   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919657   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.919859   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.920026   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.920166   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.012754   65087 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:05.017004   65087 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:05.017024   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:05.017091   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:05.017180   65087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:05.017293   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:05.026980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:05.051265   65087 start.go:296] duration metric: took 135.164451ms for postStartSetup
	I0804 00:15:05.051309   65087 fix.go:56] duration metric: took 18.608839754s for fixHost
	I0804 00:15:05.051331   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.054286   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054683   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.054710   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054876   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.055127   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055321   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055485   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.055668   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:05.055870   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:05.055882   65087 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:15:05.170285   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730505.141206116
	
	I0804 00:15:05.170314   65087 fix.go:216] guest clock: 1722730505.141206116
	I0804 00:15:05.170321   65087 fix.go:229] Guest: 2024-08-04 00:15:05.141206116 +0000 UTC Remote: 2024-08-04 00:15:05.051313292 +0000 UTC m=+243.154971169 (delta=89.892824ms)
	I0804 00:15:05.170341   65087 fix.go:200] guest clock delta is within tolerance: 89.892824ms
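The guest-clock check above parses the guest's "date +%s.%N" output and compares it with the host clock; here the ~90ms delta is accepted. A sketch of that comparison (the 1s tolerance is an assumption; the log only reports that the delta is within tolerance):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time parsed from the date output logged above.
	guest := time.Unix(1722730505, 141206116)
	host := time.Now()

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 1 * time.Second // assumed threshold for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}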
	I0804 00:15:05.170359   65087 start.go:83] releasing machines lock for "no-preload-118016", held for 18.727925423s
	I0804 00:15:05.170392   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.170673   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:05.173694   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174084   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.174117   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174265   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.174828   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175015   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175103   65087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:05.175145   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.175263   65087 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:05.175286   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.177906   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178280   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178307   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178329   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178470   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.178688   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.178777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178832   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178854   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.178945   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.179025   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.179111   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.179265   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.179417   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.282397   65087 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:05.288682   65087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:05.434388   65087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:05.440857   65087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:05.440937   65087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:05.461853   65087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:05.461879   65087 start.go:495] detecting cgroup driver to use...
	I0804 00:15:05.461944   65087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:05.478397   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:05.494093   65087 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:05.494151   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:05.509391   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:05.524127   65087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:05.640185   65087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:05.784994   65087 docker.go:233] disabling docker service ...
	I0804 00:15:05.785071   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:05.802802   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:05.818424   65087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:05.970147   65087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:06.099759   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:06.114434   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:06.132989   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:06.433914   65087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0804 00:15:06.433969   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.452155   65087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:06.452245   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.464730   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.475848   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.488341   65087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:06.501984   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.514776   65087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.534773   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
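The sed commands above rewrite the CRI-O drop-in so it uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager. A sketch of the same two edits done with regexp replacement instead of sed (illustrative only; minikube applies them remotely via ssh_runner):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf := string(data)
	// Same substitutions the logged sed commands perform.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// CRI-O then needs "systemctl restart crio", as in the log below.
}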
	I0804 00:15:06.547076   65087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:06.558639   65087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:06.558695   65087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:06.572920   65087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:06.583298   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:06.705307   65087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:06.845776   65087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:06.845840   65087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:06.851710   65087 start.go:563] Will wait 60s for crictl version
	I0804 00:15:06.851764   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:06.855899   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:06.904392   65087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:06.904493   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.932866   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.963071   65087 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0804 00:15:05.195984   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Start
	I0804 00:15:05.196175   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring networks are active...
	I0804 00:15:05.196904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network default is active
	I0804 00:15:05.197256   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network mk-default-k8s-diff-port-969068 is active
	I0804 00:15:05.197709   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Getting domain xml...
	I0804 00:15:05.198474   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Creating domain...
	I0804 00:15:06.489009   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting to get IP...
	I0804 00:15:06.490137   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490569   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490641   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.490549   66290 retry.go:31] will retry after 298.701839ms: waiting for machine to come up
	I0804 00:15:06.791467   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791938   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791960   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.791894   66290 retry.go:31] will retry after 373.395742ms: waiting for machine to come up
	I0804 00:15:07.166622   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167139   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.167048   66290 retry.go:31] will retry after 404.799649ms: waiting for machine to come up
	I0804 00:15:06.995779   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.495822   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.995970   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.495870   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.996379   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.495852   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.495912   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.996591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.495964   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.964314   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:06.967088   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967517   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:06.967547   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967787   65087 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:06.973133   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:06.990153   65087 kubeadm.go:883] updating cluster {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:06.990339   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.297536   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.591746   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.874720   65087 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:15:07.874798   65087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:07.914104   65087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0804 00:15:07.914127   65087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:15:07.914172   65087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.914212   65087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:07.914237   65087 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0804 00:15:07.914253   65087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.914324   65087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.914225   65087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.915833   65087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915838   65087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.915816   65087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 00:15:07.915882   65087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.915962   65087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.916150   65087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.048225   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.050828   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.051873   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.056880   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.087643   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.091720   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0804 00:15:08.116485   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.173591   65087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0804 00:15:08.173642   65087 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.173686   65087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0804 00:15:08.173704   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.173725   65087 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.173777   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.191254   65087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0804 00:15:08.191298   65087 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.191352   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.195238   65087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0804 00:15:08.195290   65087 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.195340   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.246005   65087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0804 00:15:08.246048   65087 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.246100   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.336855   65087 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0804 00:15:08.336936   65087 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.336945   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.336965   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.337078   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.337120   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.337161   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.337207   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.425270   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425297   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.425296   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.425455   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425522   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.458378   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.458520   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.460719   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460827   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460889   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0804 00:15:08.460983   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:08.492690   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0804 00:15:08.492789   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492808   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492839   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:08.492852   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492863   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492932   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492976   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0804 00:15:08.493036   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0804 00:15:08.763401   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063302   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.570424927s)
	I0804 00:15:11.063326   65087 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.570469177s)
	I0804 00:15:11.063341   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0804 00:15:11.063348   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0804 00:15:11.063355   65087 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063377   65087 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.299939136s)
	I0804 00:15:11.063414   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063438   65087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0804 00:15:11.063468   65087 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063516   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:07.573639   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574103   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574150   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.574068   66290 retry.go:31] will retry after 552.033422ms: waiting for machine to come up
	I0804 00:15:08.127755   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128317   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128345   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.128254   66290 retry.go:31] will retry after 601.661676ms: waiting for machine to come up
	I0804 00:15:08.731160   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731571   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.731526   66290 retry.go:31] will retry after 899.954536ms: waiting for machine to come up
	I0804 00:15:09.632769   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633217   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:09.633188   66290 retry.go:31] will retry after 1.096119877s: waiting for machine to come up
	I0804 00:15:10.731586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732092   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732116   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:10.732062   66290 retry.go:31] will retry after 1.09033143s: waiting for machine to come up
	I0804 00:15:11.824287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824697   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824723   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:11.824648   66290 retry.go:31] will retry after 1.458040473s: waiting for machine to come up
	I0804 00:15:11.996494   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.496005   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.996429   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.496310   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.996525   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.495995   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.996172   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.495809   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.996016   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.496210   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.840723   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.777281435s)
	I0804 00:15:14.840759   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0804 00:15:14.840758   65087 ssh_runner.go:235] Completed: which crictl: (3.777229082s)
	I0804 00:15:14.840769   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:14.894482   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0804 00:15:14.894607   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:16.729218   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (1.888374505s)
	I0804 00:15:16.729270   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0804 00:15:16.729277   65087 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.834630766s)
	I0804 00:15:16.729304   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:16.729312   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0804 00:15:16.729368   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:13.284961   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:13.285332   66290 retry.go:31] will retry after 2.307816709s: waiting for machine to come up
	I0804 00:15:15.594435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594855   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594885   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:15.594804   66290 retry.go:31] will retry after 2.83542957s: waiting for machine to come up
	I0804 00:15:16.996765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.496069   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.995828   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.495847   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.996276   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.496155   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.996708   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.996145   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.496193   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.031187   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.301792704s)
	I0804 00:15:19.031309   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0804 00:15:19.031343   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:19.031389   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:20.493093   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.461677557s)
	I0804 00:15:20.493134   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0804 00:15:20.493152   65087 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:20.493202   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:18.433690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434156   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434188   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:18.434105   66290 retry.go:31] will retry after 2.563856777s: waiting for machine to come up
	I0804 00:15:20.999804   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000307   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:21.000236   66290 retry.go:31] will retry after 3.783170851s: waiting for machine to come up
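
The libmachine lines above are the classic "poll with a growing, jittered delay" loop while waiting for the VM to obtain a DHCP lease (retries grow from ~300ms up to a few seconds). A minimal standalone Go sketch of that pattern, for illustration only; lookupIP is a hypothetical stand-in for asking libvirt for the domain's lease, and the timings only approximate those in the log.

// wait_for_ip.go - illustrative retry/backoff sketch, not minikube's code.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for "read the domain's DHCP lease from libvirt".
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Jittered, growing delay, similar in shape to the 298ms, 373ms,
		// 404ms, ... 3.7s intervals in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for machine to come up")
}
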
	I0804 00:15:26.095635   64502 start.go:364] duration metric: took 52.776761645s to acquireMachinesLock for "embed-certs-877598"
	I0804 00:15:26.095695   64502 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:26.095703   64502 fix.go:54] fixHost starting: 
	I0804 00:15:26.096104   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:26.096143   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:26.113770   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0804 00:15:26.114303   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:26.114742   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:15:26.114768   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:26.115137   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:26.115330   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:26.115508   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:15:26.117156   64502 fix.go:112] recreateIfNeeded on embed-certs-877598: state=Stopped err=<nil>
	I0804 00:15:26.117179   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	W0804 00:15:26.117343   64502 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:26.119743   64502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-877598" ...
	I0804 00:15:21.996520   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.495922   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.995766   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.495923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.995770   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.496788   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.996759   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.996017   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.496445   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.363529   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870304087s)
	I0804 00:15:22.363559   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0804 00:15:22.363573   65087 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:22.363618   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:23.009879   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0804 00:15:23.009924   65087 cache_images.go:123] Successfully loaded all cached images
	I0804 00:15:23.009932   65087 cache_images.go:92] duration metric: took 15.095790334s to LoadCachedImages
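
The LoadCachedImages sequence above follows a check-then-load pattern for each image: probe the runtime with "podman image inspect", and if the expected image is absent, stream the cached tarball in with "podman load -i". A minimal standalone Go sketch of that pattern for one image, for illustration only (the image tag and tarball path are taken from the log; this is not minikube's cache_images implementation):

// load_cached_image.go - illustrative sketch of the cache-load pattern above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	image := "registry.k8s.io/kube-apiserver:v1.31.0-rc.0"
	tarball := "/var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0"

	// "podman image inspect" exits non-zero when the image is absent.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		fmt.Println("image already present, nothing to load")
		return
	}
	// Import the cached layer archive into the runtime's storage.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "podman load: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("loaded %s from cache\n", image)
}
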
	I0804 00:15:23.009946   65087 kubeadm.go:934] updating node { 192.168.61.137 8443 v1.31.0-rc.0 crio true true} ...
	I0804 00:15:23.010145   65087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:23.010230   65087 ssh_runner.go:195] Run: crio config
	I0804 00:15:23.057968   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:23.057991   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:23.058002   65087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:23.058022   65087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118016 NodeName:no-preload-118016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:23.058149   65087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-118016"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:23.058210   65087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0804 00:15:23.068635   65087 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:23.068713   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:23.077867   65087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0804 00:15:23.094220   65087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0804 00:15:23.110798   65087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0804 00:15:23.132230   65087 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:23.136622   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:23.149229   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:23.284623   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:23.309115   65087 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016 for IP: 192.168.61.137
	I0804 00:15:23.309212   65087 certs.go:194] generating shared ca certs ...
	I0804 00:15:23.309242   65087 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:23.309451   65087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:23.309509   65087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:23.309525   65087 certs.go:256] generating profile certs ...
	I0804 00:15:23.309633   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.key
	I0804 00:15:23.309718   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key.794a08a1
	I0804 00:15:23.309775   65087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key
	I0804 00:15:23.309951   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:23.309992   65087 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:23.310006   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:23.310050   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:23.310084   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:23.310125   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:23.310186   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:23.310811   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:23.346479   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:23.390508   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:23.419626   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:23.453891   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:15:23.481597   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:23.507749   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:23.537567   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:23.565469   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:23.590844   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:23.618748   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:23.645921   65087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:23.664034   65087 ssh_runner.go:195] Run: openssl version
	I0804 00:15:23.670083   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:23.681080   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685717   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685777   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.691573   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:23.702260   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:23.713185   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717747   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717803   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.723598   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:23.734445   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:23.745394   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750239   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750312   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.756471   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:23.767795   65087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:23.772483   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:23.778613   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:23.784560   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:23.790455   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:23.796260   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:23.802405   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
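
The openssl invocations above use "-checkend 86400" to ask whether each certificate is still valid for at least another 24 hours. A minimal standalone Go equivalent of that check for one of the files listed above, shown here purely as an illustration of what -checkend does:

// cert_checkend.go - illustrative Go equivalent of "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
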
	I0804 00:15:23.808623   65087 kubeadm.go:392] StartCluster: {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:23.808710   65087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:23.808753   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.857908   65087 cri.go:89] found id: ""
	I0804 00:15:23.857983   65087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:23.868694   65087 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:23.868717   65087 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:23.868789   65087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:23.878826   65087 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:23.879879   65087 kubeconfig.go:125] found "no-preload-118016" server: "https://192.168.61.137:8443"
	I0804 00:15:23.882653   65087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:23.893441   65087 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.137
	I0804 00:15:23.893475   65087 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:23.893489   65087 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:23.893533   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.933954   65087 cri.go:89] found id: ""
	I0804 00:15:23.934026   65087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:23.951080   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:23.962250   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:23.962274   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:23.962327   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:23.971760   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:23.971817   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:23.981767   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:23.991443   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:23.991494   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:24.001911   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.011927   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:24.011988   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.022349   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:24.032305   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:24.032371   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:24.042416   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:24.052403   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:24.163413   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.106900   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.323496   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.410928   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.569137   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:25.569221   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.069288   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.570343   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.615965   65087 api_server.go:72] duration metric: took 1.046825245s to wait for apiserver process to appear ...
	I0804 00:15:26.615997   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:26.616022   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:26.616618   65087 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
	I0804 00:15:24.788329   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788775   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Found IP for machine: 192.168.39.132
	I0804 00:15:24.788799   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has current primary IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788811   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserving static IP address...
	I0804 00:15:24.789238   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.789266   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | skip adding static IP to network mk-default-k8s-diff-port-969068 - found existing host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"}
	I0804 00:15:24.789287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserved static IP address: 192.168.39.132
	I0804 00:15:24.789303   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for SSH to be available...
	I0804 00:15:24.789333   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Getting to WaitForSSH function...
	I0804 00:15:24.791371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791734   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.791762   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH client type: external
	I0804 00:15:24.791934   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa (-rw-------)
	I0804 00:15:24.791975   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:24.791994   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | About to run SSH command:
	I0804 00:15:24.792010   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | exit 0
	I0804 00:15:24.921420   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:24.921795   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetConfigRaw
	I0804 00:15:24.922375   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:24.925074   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.925431   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925680   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:15:24.925904   65441 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:24.925924   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:24.926120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:24.928597   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929006   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.929045   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929171   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:24.929334   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929498   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:24.929814   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:24.930001   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:24.930012   65441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:25.046325   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:25.046355   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046703   65441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-969068"
	I0804 00:15:25.046733   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046940   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.049807   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050383   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.050427   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050547   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.050739   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.050937   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.051131   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.051296   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.051504   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.051525   65441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-969068 && echo "default-k8s-diff-port-969068" | sudo tee /etc/hostname
	I0804 00:15:25.182512   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-969068
	
	I0804 00:15:25.182552   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.185673   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186019   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.186051   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186241   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.186425   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186551   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186660   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.186853   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.187034   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.187051   65441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-969068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-969068/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-969068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:25.313435   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:25.313470   65441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:25.313518   65441 buildroot.go:174] setting up certificates
	I0804 00:15:25.313531   65441 provision.go:84] configureAuth start
	I0804 00:15:25.313544   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.313856   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:25.316883   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317233   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.317287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317475   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.319773   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320180   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.320214   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320404   65441 provision.go:143] copyHostCerts
	I0804 00:15:25.320459   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:25.320467   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:25.320531   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:25.320666   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:25.320675   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:25.320702   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:25.320769   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:25.320777   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:25.320804   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:25.320871   65441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-969068 san=[127.0.0.1 192.168.39.132 default-k8s-diff-port-969068 localhost minikube]
	I0804 00:15:25.374535   65441 provision.go:177] copyRemoteCerts
	I0804 00:15:25.374590   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:25.374613   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.377629   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378047   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.378073   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.378478   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.378672   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.378897   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.469632   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:25.495826   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0804 00:15:25.527006   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:25.557603   65441 provision.go:87] duration metric: took 244.055462ms to configureAuth
	I0804 00:15:25.557637   65441 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:25.557873   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:25.557982   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.560974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561339   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.561389   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.561740   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.561881   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.562043   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.562248   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.562456   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.562471   65441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:25.835452   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:25.835480   65441 machine.go:97] duration metric: took 909.563441ms to provisionDockerMachine
	I0804 00:15:25.835496   65441 start.go:293] postStartSetup for "default-k8s-diff-port-969068" (driver="kvm2")
	I0804 00:15:25.835512   65441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:25.835541   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:25.835846   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:25.835873   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.838713   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839124   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.839151   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.839465   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.839634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.839779   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.928376   65441 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:25.932472   65441 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:25.932498   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:25.932608   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:25.932775   65441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:25.932951   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:25.943100   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:25.969517   65441 start.go:296] duration metric: took 134.003956ms for postStartSetup
	I0804 00:15:25.969567   65441 fix.go:56] duration metric: took 20.799045329s for fixHost
	I0804 00:15:25.969591   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.972743   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973172   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.973204   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973342   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.973596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973768   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973944   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.974158   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.974330   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.974343   65441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:26.095438   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730526.053053982
	
	I0804 00:15:26.095462   65441 fix.go:216] guest clock: 1722730526.053053982
	I0804 00:15:26.095472   65441 fix.go:229] Guest: 2024-08-04 00:15:26.053053982 +0000 UTC Remote: 2024-08-04 00:15:25.969572309 +0000 UTC m=+213.641216658 (delta=83.481673ms)
	I0804 00:15:26.095524   65441 fix.go:200] guest clock delta is within tolerance: 83.481673ms
	I0804 00:15:26.095534   65441 start.go:83] releasing machines lock for "default-k8s-diff-port-969068", held for 20.925048627s
	I0804 00:15:26.095570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.095862   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:26.098718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099112   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.099145   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.099929   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100182   65441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:26.100222   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.100347   65441 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:26.100388   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.103393   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103720   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103942   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.103963   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104142   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104159   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.104243   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104347   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104384   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104499   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104545   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104728   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.104881   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.214704   65441 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:26.221287   65441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:26.378021   65441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:26.385673   65441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:26.385751   65441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:26.403073   65441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:26.403104   65441 start.go:495] detecting cgroup driver to use...
	I0804 00:15:26.403193   65441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:26.421108   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:26.435556   65441 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:26.435627   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:26.455219   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:26.477841   65441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:26.626980   65441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:26.806808   65441 docker.go:233] disabling docker service ...
	I0804 00:15:26.806887   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:26.824079   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:26.839225   65441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:26.967375   65441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:27.136156   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:27.151822   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:27.173326   65441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:27.173404   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.184431   65441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:27.184509   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.194890   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.208349   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.222326   65441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:27.237212   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.249571   65441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.274913   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.288929   65441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:27.305789   65441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:27.305863   65441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:27.321708   65441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:27.332129   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:27.482279   65441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:27.638388   65441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:27.638465   65441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:27.644607   65441 start.go:563] Will wait 60s for crictl version
	I0804 00:15:27.644665   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:15:27.648663   65441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:27.691731   65441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:27.691824   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.731365   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.767416   65441 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:26.121074   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Start
	I0804 00:15:26.121263   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring networks are active...
	I0804 00:15:26.122075   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network default is active
	I0804 00:15:26.122471   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network mk-embed-certs-877598 is active
	I0804 00:15:26.122884   64502 main.go:141] libmachine: (embed-certs-877598) Getting domain xml...
	I0804 00:15:26.123684   64502 main.go:141] libmachine: (embed-certs-877598) Creating domain...
	I0804 00:15:27.536026   64502 main.go:141] libmachine: (embed-certs-877598) Waiting to get IP...
	I0804 00:15:27.537165   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.537650   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.537734   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.537654   66522 retry.go:31] will retry after 277.473157ms: waiting for machine to come up
	I0804 00:15:27.817330   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.817824   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.817858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.817788   66522 retry.go:31] will retry after 322.160841ms: waiting for machine to come up
	I0804 00:15:28.141287   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.141818   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.141855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.141775   66522 retry.go:31] will retry after 325.833359ms: waiting for machine to come up
	I0804 00:15:28.469440   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.469976   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.470015   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.469933   66522 retry.go:31] will retry after 372.304971ms: waiting for machine to come up
	I0804 00:15:28.843604   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.844376   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.844400   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.844297   66522 retry.go:31] will retry after 607.361674ms: waiting for machine to come up
	I0804 00:15:29.453082   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:29.453557   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:29.453586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:29.453527   66522 retry.go:31] will retry after 615.002468ms: waiting for machine to come up
	I0804 00:15:30.070598   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.071112   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.071134   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.071079   66522 retry.go:31] will retry after 834.292107ms: waiting for machine to come up
	I0804 00:15:27.116719   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.030589   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.030625   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.030641   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.091459   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.091494   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.116633   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.149335   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.149394   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:30.617009   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.622086   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.622117   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.116320   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.125065   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:31.125143   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.617091   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.627142   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:15:31.636371   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:15:31.636405   65087 api_server.go:131] duration metric: took 5.020400356s to wait for apiserver health ...
	I0804 00:15:31.636414   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:31.636420   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:31.638145   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:26.996399   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.496810   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.995825   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.496395   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.996561   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.496735   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.996542   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.496406   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.996259   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.496307   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.639553   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:31.658269   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:31.685188   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:31.703581   65087 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:31.703627   65087 system_pods.go:61] "coredns-6f6b679f8f-9vdxc" [fd645695-cc1d-4394-96b0-832f48e9cf26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:31.703638   65087 system_pods.go:61] "etcd-no-preload-118016" [a329ecd7-7574-48f4-a776-7b7c05465f8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:31.703649   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [43d313aa-1844-488d-8925-b744f504323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:31.703661   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [d56a5461-29d3-47f7-95df-a7fc6b52ca2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:31.703669   65087 system_pods.go:61] "kube-proxy-8bcg7" [c2b43118-5216-41bf-9f16-00f11ca1eab5] Running
	I0804 00:15:31.703678   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [53dc528c-2f00-4ca6-86c6-d02f4533229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:31.703687   65087 system_pods.go:61] "metrics-server-6867b74b74-5xfgz" [c558b60d-3816-406a-addb-96cd42266bd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:31.703698   65087 system_pods.go:61] "storage-provisioner" [1edb442e-272f-4ef7-b3fb-7c46b915c61a] Running
	I0804 00:15:31.703707   65087 system_pods.go:74] duration metric: took 18.49198ms to wait for pod list to return data ...
	I0804 00:15:31.703721   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:31.712702   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:31.712735   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:31.712748   65087 node_conditions.go:105] duration metric: took 9.019815ms to run NodePressure ...
	I0804 00:15:31.712773   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:27.768972   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:27.772437   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.772860   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:27.772903   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.773135   65441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:27.777834   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:27.792279   65441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:27.792437   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:27.792493   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:27.833330   65441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:27.833453   65441 ssh_runner.go:195] Run: which lz4
	I0804 00:15:27.837836   65441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:15:27.842093   65441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:27.842128   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:29.410529   65441 crio.go:462] duration metric: took 1.572735301s to copy over tarball
	I0804 00:15:29.410610   65441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:32.062492   65441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.651848511s)
	I0804 00:15:32.062533   65441 crio.go:469] duration metric: took 2.651972207s to extract the tarball
	I0804 00:15:32.062545   65441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:32.100003   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:32.144166   65441 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:32.144192   65441 cache_images.go:84] Images are preloaded, skipping loading
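The preload sequence above works as follows: `crictl images --output json` finds none of the expected images, so the cached lz4 tarball (~406 MB) is copied into the VM and unpacked under /var, and a second `crictl images` then confirms all images are present. A hedged Go sketch of just the extraction step (the tar flags and paths are copied from the log; the helper name and error handling are invented for illustration):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball under /var, using the same
// tar invocation that appears in the log above. It assumes lz4 and sudo are
// available on the target machine.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extracting %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
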
	I0804 00:15:32.144201   65441 kubeadm.go:934] updating node { 192.168.39.132 8444 v1.30.3 crio true true} ...
	I0804 00:15:32.144327   65441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-969068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:32.144434   65441 ssh_runner.go:195] Run: crio config
	I0804 00:15:32.197593   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:32.197618   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:32.197630   65441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:32.197658   65441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-969068 NodeName:default-k8s-diff-port-969068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:32.197862   65441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-969068"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:32.197937   65441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:32.208469   65441 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:32.208551   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:32.218194   65441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0804 00:15:32.237731   65441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:32.259599   65441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0804 00:15:32.281113   65441 ssh_runner.go:195] Run: grep 192.168.39.132	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:32.285559   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:32.298722   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
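A few lines up, the /etc/hosts update is done with a grep/echo/cp one-liner that drops any stale control-plane.minikube.internal entry and appends the current mapping. The same idea as a small Go sketch (illustrative only; point it at a scratch file rather than the real /etc/hosts, which needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps the given IP to the given hostname, mirroring the one-liner in the log
// above. Blank lines and any stale entry for the host are dropped.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// hosts.test is a placeholder scratch file for trying the sketch out.
	if err := ensureHostsEntry("hosts.test", "192.168.39.132", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
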
	I0804 00:15:30.906612   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.907056   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.907086   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.907012   66522 retry.go:31] will retry after 1.489076061s: waiting for machine to come up
	I0804 00:15:32.397239   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:32.397614   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:32.397642   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:32.397568   66522 retry.go:31] will retry after 1.737097329s: waiting for machine to come up
	I0804 00:15:34.135859   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:34.136363   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:34.136393   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:34.136321   66522 retry.go:31] will retry after 2.154712298s: waiting for machine to come up
	I0804 00:15:31.996780   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.496164   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.996444   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.496838   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.996533   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.496300   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.996772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.495937   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.996834   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.496277   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
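The run of identical ssh_runner lines above is the wait loop for the kube-apiserver process: `sudo pgrep -xnf kube-apiserver.*minikube.*` is re-run roughly every 500ms until it finds a match. A rough local equivalent in Go (an assumption-laden sketch; the real calls go over SSH into the VM):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess re-runs pgrep until a kube-apiserver process matching
// the minikube pattern exists, mirroring the repeated ssh_runner calls above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 as soon as a matching process is found.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
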
	I0804 00:15:31.982926   65087 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989888   65087 kubeadm.go:739] kubelet initialised
	I0804 00:15:31.989926   65087 kubeadm.go:740] duration metric: took 6.968445ms waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989938   65087 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:31.997210   65087 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:34.748142   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
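The pod_ready.go entries above poll each system-critical pod until it reports the Ready condition (coredns is still not Ready at this point). A short client-go sketch of one such readiness check (illustration under assumptions: the kubeconfig path is a placeholder and minikube's own code is structured differently):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady returns true once the named pod reports the Ready condition.
func podIsReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := podIsReady(cs, "kube-system", "coredns-6f6b679f8f-9vdxc")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
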
	I0804 00:15:32.432400   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:32.450525   65441 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068 for IP: 192.168.39.132
	I0804 00:15:32.450548   65441 certs.go:194] generating shared ca certs ...
	I0804 00:15:32.450571   65441 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:32.450738   65441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:32.450801   65441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:32.450815   65441 certs.go:256] generating profile certs ...
	I0804 00:15:32.450922   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.key
	I0804 00:15:32.451000   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key.a17bd5dd
	I0804 00:15:32.451053   65441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key
	I0804 00:15:32.451199   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:32.451242   65441 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:32.451255   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:32.451279   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:32.451303   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:32.451326   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:32.451365   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:32.451910   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:32.505178   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:32.557546   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:32.596512   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:32.635476   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 00:15:32.687156   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:32.716537   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:32.746312   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:15:32.777788   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:32.806730   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:32.835822   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:32.864241   65441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:32.886754   65441 ssh_runner.go:195] Run: openssl version
	I0804 00:15:32.893177   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:32.904847   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909871   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909937   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.916357   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:32.927322   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:32.939447   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944221   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944275   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.950218   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:32.966506   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:32.981288   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986761   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986831   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.993077   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:33.007428   65441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:33.013290   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:33.019997   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:33.026423   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:33.033004   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:33.039205   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:33.045367   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
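The openssl runs just above check that each existing control-plane certificate is still valid for at least another 86400 seconds (24 hours) before it is reused. The equivalent check with Go's crypto/x509, as a hedged sketch (the certificate path is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate before reuse")
	}
}
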
	I0804 00:15:33.051462   65441 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:33.051546   65441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:33.051605   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.094354   65441 cri.go:89] found id: ""
	I0804 00:15:33.094433   65441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:33.105416   65441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:33.105439   65441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:33.105480   65441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:33.115838   65441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:33.117466   65441 kubeconfig.go:125] found "default-k8s-diff-port-969068" server: "https://192.168.39.132:8444"
	I0804 00:15:33.120806   65441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:33.130533   65441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.132
	I0804 00:15:33.130567   65441 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:33.130579   65441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:33.130628   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.178718   65441 cri.go:89] found id: ""
	I0804 00:15:33.178813   65441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:33.199000   65441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:33.212169   65441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:33.212188   65441 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:33.212255   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0804 00:15:33.225192   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:33.225254   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:33.239194   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0804 00:15:33.252402   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:33.252470   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:33.265198   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.276564   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:33.276636   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.288785   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0804 00:15:33.299848   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:33.299904   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:33.311115   65441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:33.322121   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:33.442578   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.526815   65441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084197731s)
	I0804 00:15:34.526857   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.803105   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.893681   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.978573   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:34.978668   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.479179   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.979520   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.063056   65441 api_server.go:72] duration metric: took 1.084463955s to wait for apiserver process to appear ...
	I0804 00:15:36.063161   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:36.063203   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.063755   65441 api_server.go:269] stopped: https://192.168.39.132:8444/healthz: Get "https://192.168.39.132:8444/healthz": dial tcp 192.168.39.132:8444: connect: connection refused
	I0804 00:15:36.563501   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.293051   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:36.293675   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:36.293710   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:36.293604   66522 retry.go:31] will retry after 2.826050203s: waiting for machine to come up
	I0804 00:15:39.120961   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:39.121602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:39.121628   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:39.121554   66522 retry.go:31] will retry after 2.710829438s: waiting for machine to come up
	I0804 00:15:36.996761   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.495885   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.995785   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.496550   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.996645   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.995851   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.496685   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.995896   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.495864   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.005216   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.505397   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.405829   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.405895   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.405913   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.433026   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.433063   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.563242   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.568554   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:39.568591   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.064078   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.085940   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.085978   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.564041   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.569785   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.569812   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.063334   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.068113   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.068135   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.563691   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.569214   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.569248   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.063737   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.068227   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.068260   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.563309   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.567740   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.567775   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:43.063306   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:43.067611   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:15:43.073842   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:15:43.073868   65441 api_server.go:131] duration metric: took 7.010684682s to wait for apiserver health ...
	I0804 00:15:43.073879   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:43.073887   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:43.075779   65441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:43.077123   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:43.088611   65441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:43.109845   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:43.119204   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:43.119235   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:43.119246   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:43.119259   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:43.119269   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:43.119275   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:15:43.119282   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:43.119300   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:43.119309   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:15:43.119317   65441 system_pods.go:74] duration metric: took 9.453775ms to wait for pod list to return data ...
	I0804 00:15:43.119328   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:43.122493   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:43.122516   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:43.122528   65441 node_conditions.go:105] duration metric: took 3.191087ms to run NodePressure ...
	I0804 00:15:43.122547   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:43.391258   65441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395252   65441 kubeadm.go:739] kubelet initialised
	I0804 00:15:43.395274   65441 kubeadm.go:740] duration metric: took 3.992079ms waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395282   65441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:43.400173   65441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.404618   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404645   65441 pod_ready.go:81] duration metric: took 4.449232ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.404665   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404675   65441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.409134   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409165   65441 pod_ready.go:81] duration metric: took 4.471898ms for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.409178   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409190   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.414342   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414362   65441 pod_ready.go:81] duration metric: took 5.160435ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.414374   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414383   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.513956   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.513987   65441 pod_ready.go:81] duration metric: took 99.59507ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.514003   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.514033   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.913592   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913619   65441 pod_ready.go:81] duration metric: took 399.572927ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.913628   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913634   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.313833   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313864   65441 pod_ready.go:81] duration metric: took 400.220214ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.313878   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313886   65441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.713583   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713616   65441 pod_ready.go:81] duration metric: took 399.716432ms for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.713636   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713647   65441 pod_ready.go:38] duration metric: took 1.318356042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:44.713666   65441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:15:44.725908   65441 ops.go:34] apiserver oom_adj: -16
	I0804 00:15:44.725935   65441 kubeadm.go:597] duration metric: took 11.620489409s to restartPrimaryControlPlane
	I0804 00:15:44.725947   65441 kubeadm.go:394] duration metric: took 11.674491721s to StartCluster
	I0804 00:15:44.725966   65441 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.726046   65441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:15:44.728392   65441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.728702   65441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:15:44.728805   65441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:15:44.728895   65441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728942   65441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.728954   65441 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:15:44.728958   65441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728990   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.728967   65441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.729027   65441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-969068"
	I0804 00:15:44.729039   65441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.729054   65441 addons.go:243] addon metrics-server should already be in state true
	I0804 00:15:44.729143   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.729436   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729470   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729515   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729564   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729598   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729642   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.728909   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:44.730486   65441 out.go:177] * Verifying Kubernetes components...
	I0804 00:15:44.731972   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:44.748737   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0804 00:15:44.749200   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0804 00:15:44.749311   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0804 00:15:44.749582   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749691   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749858   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.750128   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750144   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750153   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750171   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750326   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750347   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750609   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750617   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750810   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.751212   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.751249   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751286   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.751733   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751780   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.754574   65441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.754616   65441 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:15:44.754649   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.755038   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.755080   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.769763   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0804 00:15:44.770311   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.770828   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.770850   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.771209   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.771371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.771935   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0804 00:15:44.773284   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.773416   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0804 00:15:44.773646   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.773854   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.773866   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.773981   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.774227   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.774529   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.774551   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.774665   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.774711   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.774938   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.775078   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.776166   65441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:15:44.776690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.777692   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:15:44.777708   65441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:15:44.777724   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.778473   65441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:41.833728   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:41.834246   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:41.834270   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:41.834210   66522 retry.go:31] will retry after 2.891635961s: waiting for machine to come up
	I0804 00:15:44.727424   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727895   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has current primary IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727919   64502 main.go:141] libmachine: (embed-certs-877598) Found IP for machine: 192.168.50.140
	I0804 00:15:44.727943   64502 main.go:141] libmachine: (embed-certs-877598) Reserving static IP address...
	I0804 00:15:44.728570   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.728602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | skip adding static IP to network mk-embed-certs-877598 - found existing host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"}
	I0804 00:15:44.728617   64502 main.go:141] libmachine: (embed-certs-877598) Reserved static IP address: 192.168.50.140
	I0804 00:15:44.728634   64502 main.go:141] libmachine: (embed-certs-877598) Waiting for SSH to be available...
	I0804 00:15:44.728648   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Getting to WaitForSSH function...
	I0804 00:15:44.731684   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732102   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.732137   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732388   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH client type: external
	I0804 00:15:44.732408   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa (-rw-------)
	I0804 00:15:44.732438   64502 main.go:141] libmachine: (embed-certs-877598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:44.732448   64502 main.go:141] libmachine: (embed-certs-877598) DBG | About to run SSH command:
	I0804 00:15:44.732462   64502 main.go:141] libmachine: (embed-certs-877598) DBG | exit 0
	I0804 00:15:44.873689   64502 main.go:141] libmachine: (embed-certs-877598) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:44.874033   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetConfigRaw
	I0804 00:15:44.874716   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:44.877406   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.877823   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.877855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.878130   64502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/config.json ...
	I0804 00:15:44.878358   64502 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:44.878382   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:44.878563   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:44.880862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881215   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.881253   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881427   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:44.881597   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881785   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881958   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:44.882150   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:44.882381   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:44.882399   64502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:44.998143   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:44.998172   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998534   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:15:44.998564   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.001998   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002508   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.002545   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002691   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.002847   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003026   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003175   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.003388   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.003592   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.003606   64502 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-877598 && echo "embed-certs-877598" | sudo tee /etc/hostname
	I0804 00:15:45.142065   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-877598
	
	I0804 00:15:45.142123   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.145427   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.145858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.145912   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.146133   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.146279   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146422   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146595   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.146778   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.146991   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.147007   64502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-877598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-877598/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-877598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:45.275711   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:45.275748   64502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:45.275775   64502 buildroot.go:174] setting up certificates
	I0804 00:15:45.275790   64502 provision.go:84] configureAuth start
	I0804 00:15:45.275804   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:45.276145   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:45.279645   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280141   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.280166   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280298   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.283135   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283495   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.283521   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283693   64502 provision.go:143] copyHostCerts
	I0804 00:15:45.283754   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:45.283767   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:45.283837   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:45.283954   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:45.283975   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:45.284004   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:45.284168   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:45.284182   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:45.284214   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:45.284280   64502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.embed-certs-877598 san=[127.0.0.1 192.168.50.140 embed-certs-877598 localhost minikube]
	I0804 00:15:45.484805   64502 provision.go:177] copyRemoteCerts
	I0804 00:15:45.484861   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:45.484883   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.488177   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.488621   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488852   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.489032   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.489191   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.489340   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:45.580782   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:45.612118   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:45.638201   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:15:45.665741   64502 provision.go:87] duration metric: took 389.935703ms to configureAuth
	I0804 00:15:45.665778   64502 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:45.666008   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:45.666110   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.668942   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669312   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.669343   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.669812   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.669995   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.670158   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.670317   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.670501   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.670522   64502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:44.779708   65441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:44.779730   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:15:44.779747   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.780637   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781098   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.781120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.781424   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.781593   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.781753   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.783024   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783459   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.783479   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783895   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.784054   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.784219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.784343   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.793057   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0804 00:15:44.793581   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.794075   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.794094   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.794413   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.794586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.796274   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.796609   65441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:44.796623   65441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:15:44.796643   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.799445   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.799990   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.800254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.800698   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.800864   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.800974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.801305   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.962413   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:44.983596   65441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:45.057238   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:15:45.057261   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:15:45.082722   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:15:45.082745   65441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:15:45.088213   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:45.115230   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.115261   65441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:15:45.115325   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:45.164676   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.502008   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502040   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502381   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.502440   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502463   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.502476   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502484   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502701   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502718   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.510043   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.510065   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.510305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.510353   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.510364   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217233   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101870491s)
	I0804 00:15:46.217295   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217308   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.217585   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.217609   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217625   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217652   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.217719   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.218073   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.218091   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.218104   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.255756   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.091044347s)
	I0804 00:15:46.255802   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.255819   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256053   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256093   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256101   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256110   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.256117   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256412   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256446   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256454   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256465   65441 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-969068"
	I0804 00:15:46.258662   65441 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:15:41.995808   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.496612   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.996566   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.495812   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.996095   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.495902   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.996724   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.495854   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.996354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.496185   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.005235   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:44.003809   65087 pod_ready.go:92] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.003847   65087 pod_ready.go:81] duration metric: took 12.006609818s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.003861   65087 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009518   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.009541   65087 pod_ready.go:81] duration metric: took 5.671724ms for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009554   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014897   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.014923   65087 pod_ready.go:81] duration metric: took 5.360171ms for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014938   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521943   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.521968   65087 pod_ready.go:81] duration metric: took 1.507021563s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521983   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527550   65087 pod_ready.go:92] pod "kube-proxy-8bcg7" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.527575   65087 pod_ready.go:81] duration metric: took 5.585026ms for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527588   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604221   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.604245   65087 pod_ready.go:81] duration metric: took 76.648502ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604260   65087 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:46.260578   65441 addons.go:510] duration metric: took 1.531768603s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:15:46.988351   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:45.985471   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:45.985501   64502 machine.go:97] duration metric: took 1.107126695s to provisionDockerMachine
	I0804 00:15:45.985514   64502 start.go:293] postStartSetup for "embed-certs-877598" (driver="kvm2")
	I0804 00:15:45.985527   64502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:45.985554   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:45.985928   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:45.985962   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.989294   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989699   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.989731   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989875   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.990079   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.990230   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.990355   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.085684   64502 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:46.091660   64502 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:46.091690   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:46.091776   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:46.091873   64502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:46.092005   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:46.102373   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:46.129547   64502 start.go:296] duration metric: took 144.018823ms for postStartSetup
	I0804 00:15:46.129594   64502 fix.go:56] duration metric: took 20.033890858s for fixHost
	I0804 00:15:46.129619   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.132803   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133154   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.133190   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133347   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.133580   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.133766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.134016   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.134242   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:46.134454   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:46.134471   64502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:15:46.250499   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730546.219077490
	
	I0804 00:15:46.250528   64502 fix.go:216] guest clock: 1722730546.219077490
	I0804 00:15:46.250539   64502 fix.go:229] Guest: 2024-08-04 00:15:46.21907749 +0000 UTC Remote: 2024-08-04 00:15:46.129599456 +0000 UTC m=+355.401502879 (delta=89.478034ms)
	I0804 00:15:46.250567   64502 fix.go:200] guest clock delta is within tolerance: 89.478034ms
	I0804 00:15:46.250575   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 20.15490553s
	I0804 00:15:46.250609   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.250902   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:46.253782   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254164   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.254194   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254376   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.254967   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255169   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255247   64502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:46.255307   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.255376   64502 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:46.255399   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.260113   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260481   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.260511   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260529   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260702   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.260870   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.260995   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.261022   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.261045   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261182   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.261208   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.261305   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.261451   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261588   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.372061   64502 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:46.378356   64502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:46.527705   64502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:46.534567   64502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:46.534620   64502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:46.550801   64502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:46.550829   64502 start.go:495] detecting cgroup driver to use...
	I0804 00:15:46.550916   64502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:46.568369   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:46.583437   64502 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:46.583496   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:46.599267   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:46.614874   64502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:46.734467   64502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:46.900868   64502 docker.go:233] disabling docker service ...
	I0804 00:15:46.900941   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:46.915612   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:46.929948   64502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:47.056637   64502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:47.175277   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:47.190167   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:47.211062   64502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:47.211115   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.222459   64502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:47.222547   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.232964   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.243663   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.254387   64502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:47.266424   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.277323   64502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.296078   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.307058   64502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:47.317138   64502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:47.317223   64502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:47.332104   64502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:47.342965   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:47.464208   64502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:47.620127   64502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:47.620196   64502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:47.625103   64502 start.go:563] Will wait 60s for crictl version
	I0804 00:15:47.625165   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:15:47.628942   64502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:47.668593   64502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:47.668686   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.700313   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.737281   64502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:47.738730   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:47.741698   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742098   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:47.742144   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742310   64502 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:47.746883   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:47.760111   64502 kubeadm.go:883] updating cluster {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:47.760247   64502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:47.760305   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:47.801700   64502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:47.801766   64502 ssh_runner.go:195] Run: which lz4
	I0804 00:15:47.806337   64502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:15:47.811010   64502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:47.811050   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:49.359157   64502 crio.go:462] duration metric: took 1.552864688s to copy over tarball
	I0804 00:15:49.359245   64502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:46.996215   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.496634   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.996278   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.496184   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.996616   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.496240   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.996433   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.996600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.496459   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.611474   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:49.611879   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.616732   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:48.988818   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:49.988196   65441 node_ready.go:49] node "default-k8s-diff-port-969068" has status "Ready":"True"
	I0804 00:15:49.988220   65441 node_ready.go:38] duration metric: took 5.004585481s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:49.988229   65441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:49.994536   65441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001200   65441 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:50.001229   65441 pod_ready.go:81] duration metric: took 6.665744ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001243   65441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:52.009436   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.759772   64502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400487256s)
	I0804 00:15:51.759836   64502 crio.go:469] duration metric: took 2.40064418s to extract the tarball
	I0804 00:15:51.759848   64502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:51.800038   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:51.845098   64502 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:51.845124   64502 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:51.845134   64502 kubeadm.go:934] updating node { 192.168.50.140 8443 v1.30.3 crio true true} ...
	I0804 00:15:51.845258   64502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-877598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:51.845339   64502 ssh_runner.go:195] Run: crio config
	I0804 00:15:51.895019   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:15:51.895039   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:51.895048   64502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:51.895067   64502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-877598 NodeName:embed-certs-877598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:51.895202   64502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-877598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:51.895272   64502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:51.906363   64502 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:51.906426   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:51.917727   64502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0804 00:15:51.936370   64502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:51.953894   64502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0804 00:15:51.972910   64502 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:51.977288   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:51.990992   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:52.115808   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:52.133326   64502 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598 for IP: 192.168.50.140
	I0804 00:15:52.133373   64502 certs.go:194] generating shared ca certs ...
	I0804 00:15:52.133396   64502 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:52.133564   64502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:52.133613   64502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:52.133628   64502 certs.go:256] generating profile certs ...
	I0804 00:15:52.133736   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/client.key
	I0804 00:15:52.133824   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key.5511d337
	I0804 00:15:52.133873   64502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key
	I0804 00:15:52.134013   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:52.134077   64502 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:52.134091   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:52.134130   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:52.134168   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:52.134200   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:52.134256   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:52.134880   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:52.175985   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:52.209458   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:52.239097   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:52.271037   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0804 00:15:52.317594   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:52.353485   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:52.382159   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:52.407478   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:52.433103   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:52.457233   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:52.481534   64502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:52.500482   64502 ssh_runner.go:195] Run: openssl version
	I0804 00:15:52.509021   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:52.522431   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527125   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527184   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.533310   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:52.546085   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:52.557781   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562516   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562587   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.568403   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:52.580431   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:52.592706   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597280   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597382   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.603284   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:52.616100   64502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:52.621422   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:52.631811   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:52.639130   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:52.646159   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:52.652721   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:52.659459   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:15:52.665864   64502 kubeadm.go:392] StartCluster: {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:52.665991   64502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:52.666044   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.711272   64502 cri.go:89] found id: ""
	I0804 00:15:52.711346   64502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:52.722294   64502 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:52.722321   64502 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:52.722380   64502 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:52.733148   64502 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:52.734706   64502 kubeconfig.go:125] found "embed-certs-877598" server: "https://192.168.50.140:8443"
	I0804 00:15:52.737995   64502 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:52.749941   64502 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.140
	I0804 00:15:52.749986   64502 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:52.749998   64502 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:52.750043   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.793295   64502 cri.go:89] found id: ""
	I0804 00:15:52.793388   64502 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:52.811438   64502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:52.824055   64502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:52.824080   64502 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:52.824128   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:52.835393   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:52.835446   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:52.846732   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:52.856889   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:52.856942   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:52.869951   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.881836   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:52.881909   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.894121   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:52.905643   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:52.905711   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:52.917063   64502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:52.929399   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:53.132145   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.096969   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.325640   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.385886   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.472926   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:54.473002   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.973406   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.473410   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.578082   64502 api_server.go:72] duration metric: took 1.105154357s to wait for apiserver process to appear ...
	I0804 00:15:55.578170   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:55.578207   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:55.578847   64502 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I0804 00:15:51.996447   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.496265   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.996030   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.996313   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.495823   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.996360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.496652   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.996049   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:55.996141   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:56.045001   64758 cri.go:89] found id: ""
	I0804 00:15:56.045031   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.045042   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:56.045049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:56.045114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:56.086505   64758 cri.go:89] found id: ""
	I0804 00:15:56.086535   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.086547   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:56.086554   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:56.086618   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:56.125953   64758 cri.go:89] found id: ""
	I0804 00:15:56.125983   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.125994   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:56.126001   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:56.126060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:56.167313   64758 cri.go:89] found id: ""
	I0804 00:15:56.167343   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.167354   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:56.167361   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:56.167424   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:56.211102   64758 cri.go:89] found id: ""
	I0804 00:15:56.211132   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.211142   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:56.211149   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:56.211231   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:56.246894   64758 cri.go:89] found id: ""
	I0804 00:15:56.246926   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.246937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:56.246945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:56.247012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:56.281952   64758 cri.go:89] found id: ""
	I0804 00:15:56.281980   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.281991   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:56.281998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:56.282060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:56.317685   64758 cri.go:89] found id: ""
	I0804 00:15:56.317719   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.317733   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:56.317745   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:56.317762   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:56.335040   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:56.335069   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:56.475995   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:56.476017   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:56.476033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:56.567508   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:56.567551   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:56.618136   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:56.618166   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:54.112928   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.112987   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.179330   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.789712   65441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.789738   65441 pod_ready.go:81] duration metric: took 4.788487591s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.789749   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799762   65441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.799785   65441 pod_ready.go:81] duration metric: took 10.029856ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799795   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805685   65441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.805708   65441 pod_ready.go:81] duration metric: took 5.905108ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805718   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809797   65441 pod_ready.go:92] pod "kube-proxy-zz7fr" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.809818   65441 pod_ready.go:81] duration metric: took 4.094183ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809827   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820536   65441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.820557   65441 pod_ready.go:81] duration metric: took 10.722903ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820567   65441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:56.827543   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.078916   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.738609   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.738641   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:58.738657   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.772665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.772695   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:59.079121   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.083798   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.083829   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.579242   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.585343   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.585381   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.078877   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.099981   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.100022   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.578505   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.582665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.582692   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.172886   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:59.187045   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:59.187128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:59.225135   64758 cri.go:89] found id: ""
	I0804 00:15:59.225164   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.225173   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:59.225179   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:59.225255   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:59.262538   64758 cri.go:89] found id: ""
	I0804 00:15:59.262566   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.262573   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:59.262578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:59.262635   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:59.301665   64758 cri.go:89] found id: ""
	I0804 00:15:59.301697   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.301708   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:59.301715   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:59.301778   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:59.362742   64758 cri.go:89] found id: ""
	I0804 00:15:59.362766   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.362774   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:59.362779   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:59.362834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:59.404398   64758 cri.go:89] found id: ""
	I0804 00:15:59.404431   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.404509   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:59.404525   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:59.404594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:59.454257   64758 cri.go:89] found id: ""
	I0804 00:15:59.454285   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.454297   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:59.454305   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:59.454363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:59.496790   64758 cri.go:89] found id: ""
	I0804 00:15:59.496818   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.496829   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:59.496837   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:59.496896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:59.537395   64758 cri.go:89] found id: ""
	I0804 00:15:59.537424   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.537431   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:59.537439   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:59.537453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.600005   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:59.600042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:59.617304   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:59.617336   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:59.692828   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:59.692849   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:59.692864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:59.764000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:59.764038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:58.611600   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.110986   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.079326   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.083661   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.083689   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:01.578711   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.583011   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.583040   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:02.078606   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:02.083234   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:16:02.090079   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:16:02.090112   64502 api_server.go:131] duration metric: took 6.511921332s to wait for apiserver health ...
	I0804 00:16:02.090123   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:16:02.090132   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:16:02.092169   64502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:58.829268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.327623   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:02.093704   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:16:02.109001   64502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:16:02.131996   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:16:02.145300   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:16:02.145333   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:16:02.145340   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:16:02.145348   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:16:02.145370   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:16:02.145380   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:16:02.145389   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:16:02.145397   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:16:02.145403   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:16:02.145412   64502 system_pods.go:74] duration metric: took 13.393537ms to wait for pod list to return data ...
	I0804 00:16:02.145425   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:16:02.149623   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:16:02.149651   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:16:02.149661   64502 node_conditions.go:105] duration metric: took 4.231097ms to run NodePressure ...
	I0804 00:16:02.149677   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:16:02.424261   64502 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429537   64502 kubeadm.go:739] kubelet initialised
	I0804 00:16:02.429555   64502 kubeadm.go:740] duration metric: took 5.269005ms waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429563   64502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:02.435433   64502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.440580   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440606   64502 pod_ready.go:81] duration metric: took 5.145511ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.440619   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440628   64502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.445111   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445136   64502 pod_ready.go:81] duration metric: took 4.497361ms for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.445148   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445157   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.450172   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450200   64502 pod_ready.go:81] duration metric: took 5.032514ms for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.450211   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450219   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.536314   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536386   64502 pod_ready.go:81] duration metric: took 86.155481ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.536398   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536409   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.935794   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935830   64502 pod_ready.go:81] duration metric: took 399.405535ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.935842   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935861   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.335730   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335760   64502 pod_ready.go:81] duration metric: took 399.889478ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.335772   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335780   64502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.735762   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735786   64502 pod_ready.go:81] duration metric: took 399.996995ms for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.735795   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735802   64502 pod_ready.go:38] duration metric: took 1.306222891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:03.735818   64502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:16:03.748578   64502 ops.go:34] apiserver oom_adj: -16
	I0804 00:16:03.748602   64502 kubeadm.go:597] duration metric: took 11.026274037s to restartPrimaryControlPlane
	I0804 00:16:03.748611   64502 kubeadm.go:394] duration metric: took 11.082760058s to StartCluster
	I0804 00:16:03.748637   64502 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.748719   64502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:16:03.750554   64502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.750824   64502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:16:03.750900   64502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:16:03.750998   64502 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-877598"
	I0804 00:16:03.751041   64502 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-877598"
	W0804 00:16:03.751053   64502 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:16:03.751051   64502 addons.go:69] Setting default-storageclass=true in profile "embed-certs-877598"
	I0804 00:16:03.751072   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:16:03.751108   64502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-877598"
	I0804 00:16:03.751063   64502 addons.go:69] Setting metrics-server=true in profile "embed-certs-877598"
	I0804 00:16:03.751181   64502 addons.go:234] Setting addon metrics-server=true in "embed-certs-877598"
	W0804 00:16:03.751196   64502 addons.go:243] addon metrics-server should already be in state true
	I0804 00:16:03.751245   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751467   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751503   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751540   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751612   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751088   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751990   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.752017   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.752817   64502 out.go:177] * Verifying Kubernetes components...
	I0804 00:16:03.754613   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:16:03.769684   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0804 00:16:03.769701   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0804 00:16:03.769697   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0804 00:16:03.770197   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770332   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770619   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770792   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770808   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.770935   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770949   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771125   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771327   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771520   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.771545   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771555   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.771938   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.772138   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772195   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.772521   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772565   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.776267   64502 addons.go:234] Setting addon default-storageclass=true in "embed-certs-877598"
	W0804 00:16:03.776292   64502 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:16:03.776327   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.776695   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.776738   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.789183   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0804 00:16:03.789660   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.789796   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0804 00:16:03.790184   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790202   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790246   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.790608   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.790869   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790900   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790985   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.791276   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.791519   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.793005   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.793338   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.795747   64502 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:16:03.795748   64502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:16:03.796208   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0804 00:16:03.796652   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.797194   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.797220   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.797589   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:16:03.797611   64502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:16:03.797632   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.797640   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.797673   64502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:03.797684   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:16:03.797697   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.798266   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.798311   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.801933   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802083   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802417   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802445   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.802766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.802851   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802868   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802936   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803140   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.803166   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.803310   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.803409   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803512   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.818073   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0804 00:16:03.818647   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.819107   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.819130   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.819488   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.819721   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.821982   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.822216   64502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:03.822232   64502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:16:03.822251   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.825593   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826055   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.826090   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826356   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.826526   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.826667   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.826829   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.955019   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:16:03.976453   64502 node_ready.go:35] waiting up to 6m0s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:04.051717   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:04.074720   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:16:04.074740   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:16:04.099578   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:16:04.099606   64502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:16:04.118348   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:04.163390   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:04.163418   64502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:16:04.227379   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:05.143364   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091613097s)
	I0804 00:16:05.143418   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143419   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.025041953s)
	I0804 00:16:05.143430   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143439   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143449   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143726   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143743   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143755   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143764   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.143893   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143915   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143935   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143964   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.144014   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144033   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.144085   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144259   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144305   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144319   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.150739   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.150761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.151073   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.151102   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.151130   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.169806   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.169832   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170103   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.170122   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170148   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170159   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.170171   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170461   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170546   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170563   64502 addons.go:475] Verifying addon metrics-server=true in "embed-certs-877598"
	I0804 00:16:05.172575   64502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0804 00:16:05.173964   64502 addons.go:510] duration metric: took 1.423065893s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0804 00:16:02.307325   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:02.324168   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:02.324233   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:02.370204   64758 cri.go:89] found id: ""
	I0804 00:16:02.370234   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.370250   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:02.370258   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:02.370325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:02.405586   64758 cri.go:89] found id: ""
	I0804 00:16:02.405616   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.405628   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:02.405636   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:02.405694   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:02.445644   64758 cri.go:89] found id: ""
	I0804 00:16:02.445665   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.445675   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:02.445682   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:02.445739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:02.483659   64758 cri.go:89] found id: ""
	I0804 00:16:02.483686   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.483695   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:02.483701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:02.483751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:02.519903   64758 cri.go:89] found id: ""
	I0804 00:16:02.519929   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.519938   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:02.519944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:02.519991   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:02.557373   64758 cri.go:89] found id: ""
	I0804 00:16:02.557401   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.557410   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:02.557416   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:02.557472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:02.594203   64758 cri.go:89] found id: ""
	I0804 00:16:02.594238   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.594249   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:02.594256   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:02.594316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:02.635487   64758 cri.go:89] found id: ""
	I0804 00:16:02.635512   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.635520   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:02.635529   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:02.635543   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:02.686990   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:02.687035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:02.701784   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:02.701810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:02.778626   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:02.778648   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:02.778662   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:02.856056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:02.856097   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:05.402858   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:05.418825   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:05.418900   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:05.458789   64758 cri.go:89] found id: ""
	I0804 00:16:05.458872   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.458887   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:05.458895   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:05.458967   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:05.498258   64758 cri.go:89] found id: ""
	I0804 00:16:05.498284   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.498295   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:05.498302   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:05.498364   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:05.540892   64758 cri.go:89] found id: ""
	I0804 00:16:05.540919   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.540927   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:05.540933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:05.540992   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:05.578876   64758 cri.go:89] found id: ""
	I0804 00:16:05.578911   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.578919   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:05.578924   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:05.578971   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:05.616248   64758 cri.go:89] found id: ""
	I0804 00:16:05.616272   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.616280   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:05.616285   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:05.616339   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:05.654387   64758 cri.go:89] found id: ""
	I0804 00:16:05.654419   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.654428   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:05.654436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:05.654528   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:05.695579   64758 cri.go:89] found id: ""
	I0804 00:16:05.695613   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.695625   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:05.695669   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:05.695752   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:05.740754   64758 cri.go:89] found id: ""
	I0804 00:16:05.740777   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.740785   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:05.740793   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:05.740805   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:05.792091   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:05.792126   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:05.809130   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:05.809164   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:05.888441   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:05.888465   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:05.888479   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:05.969336   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:05.969390   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:03.111834   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.613749   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:03.830570   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:06.328076   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.980692   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:08.480205   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:09.480127   64502 node_ready.go:49] node "embed-certs-877598" has status "Ready":"True"
	I0804 00:16:09.480147   64502 node_ready.go:38] duration metric: took 5.503660587s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:09.480155   64502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:09.485704   64502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491316   64502 pod_ready.go:92] pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:09.491340   64502 pod_ready.go:81] duration metric: took 5.611918ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491348   64502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:08.514981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:08.531117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:08.531188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:08.569167   64758 cri.go:89] found id: ""
	I0804 00:16:08.569199   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.569210   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:08.569218   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:08.569282   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:08.608478   64758 cri.go:89] found id: ""
	I0804 00:16:08.608559   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.608572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:08.608580   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:08.608636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:08.645939   64758 cri.go:89] found id: ""
	I0804 00:16:08.645972   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.645983   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:08.645990   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:08.646050   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:08.685274   64758 cri.go:89] found id: ""
	I0804 00:16:08.685305   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.685316   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:08.685324   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:08.685400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:08.722314   64758 cri.go:89] found id: ""
	I0804 00:16:08.722345   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.722357   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:08.722363   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:08.722427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:08.758577   64758 cri.go:89] found id: ""
	I0804 00:16:08.758606   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.758617   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:08.758624   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:08.758685   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.798734   64758 cri.go:89] found id: ""
	I0804 00:16:08.798761   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.798773   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:08.798781   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:08.798842   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:08.837577   64758 cri.go:89] found id: ""
	I0804 00:16:08.837600   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.837608   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:08.837616   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:08.837627   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:08.894426   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:08.894465   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:08.909851   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:08.909879   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:08.989858   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:08.989878   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:08.989893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:09.081056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:09.081098   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:11.627914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:11.641805   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:11.641896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:11.679002   64758 cri.go:89] found id: ""
	I0804 00:16:11.679028   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.679036   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:11.679042   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:11.679090   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:11.720188   64758 cri.go:89] found id: ""
	I0804 00:16:11.720220   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.720236   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:11.720245   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:11.720307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:11.760085   64758 cri.go:89] found id: ""
	I0804 00:16:11.760118   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.760130   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:11.760138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:11.760198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:11.796220   64758 cri.go:89] found id: ""
	I0804 00:16:11.796249   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.796266   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:11.796274   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:11.796335   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:11.834216   64758 cri.go:89] found id: ""
	I0804 00:16:11.834243   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.834253   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:11.834260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:11.834336   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:11.869205   64758 cri.go:89] found id: ""
	I0804 00:16:11.869230   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.869237   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:11.869243   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:11.869301   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.110499   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.618011   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:08.827284   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.828942   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:11.498264   64502 pod_ready.go:102] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:12.498916   64502 pod_ready.go:92] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:12.498949   64502 pod_ready.go:81] duration metric: took 3.007593153s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:12.498961   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562862   64502 pod_ready.go:92] pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.562896   64502 pod_ready.go:81] duration metric: took 2.063926324s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562910   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573628   64502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.573655   64502 pod_ready.go:81] duration metric: took 10.735916ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573670   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583241   64502 pod_ready.go:92] pod "kube-proxy-wk8zf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.583266   64502 pod_ready.go:81] duration metric: took 9.588875ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583278   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593419   64502 pod_ready.go:92] pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.593445   64502 pod_ready.go:81] duration metric: took 10.158665ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593457   64502 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:11.912091   64758 cri.go:89] found id: ""
	I0804 00:16:11.912120   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.912132   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:11.912145   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:11.912203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:11.949570   64758 cri.go:89] found id: ""
	I0804 00:16:11.949603   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.949614   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:11.949625   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:11.949643   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:12.006542   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:12.006575   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:12.022435   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:12.022474   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:12.101007   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:12.101032   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:12.101057   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:12.183836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:12.183876   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:14.725345   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:14.738389   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:14.738464   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:14.780103   64758 cri.go:89] found id: ""
	I0804 00:16:14.780133   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.780142   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:14.780147   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:14.780197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:14.817811   64758 cri.go:89] found id: ""
	I0804 00:16:14.817847   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.817863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:14.817872   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:14.817946   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:14.854450   64758 cri.go:89] found id: ""
	I0804 00:16:14.854478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.854488   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:14.854495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:14.854561   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:14.891862   64758 cri.go:89] found id: ""
	I0804 00:16:14.891891   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.891900   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:14.891905   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:14.891958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:14.928450   64758 cri.go:89] found id: ""
	I0804 00:16:14.928478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.928488   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:14.928495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:14.928554   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:14.965820   64758 cri.go:89] found id: ""
	I0804 00:16:14.965848   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.965860   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:14.965867   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:14.965945   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:15.008725   64758 cri.go:89] found id: ""
	I0804 00:16:15.008874   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.008888   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:15.008897   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:15.008957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:15.044618   64758 cri.go:89] found id: ""
	I0804 00:16:15.044768   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.044792   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:15.044802   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:15.044815   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:15.102786   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:15.102825   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:15.118305   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:15.118347   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:15.196397   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:15.196420   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:15.196435   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:15.277941   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:15.277986   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:13.110969   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.112546   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:13.327840   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.826447   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:16.600315   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.099064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.819354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:17.834271   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:17.834332   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:17.870930   64758 cri.go:89] found id: ""
	I0804 00:16:17.870961   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.870973   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:17.870980   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:17.871040   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:17.907980   64758 cri.go:89] found id: ""
	I0804 00:16:17.908007   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.908016   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:17.908021   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:17.908067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:17.943257   64758 cri.go:89] found id: ""
	I0804 00:16:17.943284   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.943295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:17.943301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:17.943363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:17.982297   64758 cri.go:89] found id: ""
	I0804 00:16:17.982328   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.982338   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:17.982345   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:17.982405   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:18.022780   64758 cri.go:89] found id: ""
	I0804 00:16:18.022810   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.022841   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:18.022850   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:18.022913   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:18.061891   64758 cri.go:89] found id: ""
	I0804 00:16:18.061926   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.061937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:18.061945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:18.062012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:18.100807   64758 cri.go:89] found id: ""
	I0804 00:16:18.100845   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.100855   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:18.100862   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:18.100917   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:18.142011   64758 cri.go:89] found id: ""
	I0804 00:16:18.142044   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.142056   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:18.142066   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:18.142090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:18.195476   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:18.195511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:18.209661   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:18.209690   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:18.282638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:18.282657   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:18.282669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:18.363900   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:18.363938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:20.908753   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:20.922878   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:20.922962   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:20.961013   64758 cri.go:89] found id: ""
	I0804 00:16:20.961041   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.961052   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:20.961058   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:20.961109   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:20.998027   64758 cri.go:89] found id: ""
	I0804 00:16:20.998059   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.998068   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:20.998074   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:20.998121   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:21.035640   64758 cri.go:89] found id: ""
	I0804 00:16:21.035669   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.035680   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:21.035688   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:21.035751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:21.075737   64758 cri.go:89] found id: ""
	I0804 00:16:21.075770   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.075779   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:21.075786   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:21.075846   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:21.120024   64758 cri.go:89] found id: ""
	I0804 00:16:21.120046   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.120054   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:21.120061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:21.120126   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:21.160796   64758 cri.go:89] found id: ""
	I0804 00:16:21.160821   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.160840   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:21.160847   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:21.160907   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:21.195519   64758 cri.go:89] found id: ""
	I0804 00:16:21.195547   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.195558   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:21.195566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:21.195629   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:21.236193   64758 cri.go:89] found id: ""
	I0804 00:16:21.236222   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.236232   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:21.236243   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:21.236258   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:21.295154   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:21.295198   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:21.309540   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:21.309566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:21.389391   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:21.389416   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:21.389433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:21.472771   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:21.472808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:17.611366   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.612092   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.827036   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.827655   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.828026   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.101899   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:23.601687   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.018923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:24.032954   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:24.033013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:24.073677   64758 cri.go:89] found id: ""
	I0804 00:16:24.073703   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.073711   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:24.073716   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:24.073777   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:24.115752   64758 cri.go:89] found id: ""
	I0804 00:16:24.115775   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.115785   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:24.115792   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:24.115849   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:24.152967   64758 cri.go:89] found id: ""
	I0804 00:16:24.153001   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.153017   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:24.153024   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:24.153098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:24.190557   64758 cri.go:89] found id: ""
	I0804 00:16:24.190581   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.190589   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:24.190595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:24.190643   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:24.229312   64758 cri.go:89] found id: ""
	I0804 00:16:24.229341   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.229351   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:24.229373   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:24.229437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:24.265076   64758 cri.go:89] found id: ""
	I0804 00:16:24.265100   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.265107   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:24.265113   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:24.265167   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:24.306508   64758 cri.go:89] found id: ""
	I0804 00:16:24.306534   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.306542   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:24.306547   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:24.306598   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:24.350714   64758 cri.go:89] found id: ""
	I0804 00:16:24.350747   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.350759   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:24.350770   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:24.350785   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:24.366188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:24.366216   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:24.438410   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:24.438431   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:24.438447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:24.522635   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:24.522669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.562647   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:24.562678   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:22.110420   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.111399   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.613839   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.327982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.826914   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.099435   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:28.099896   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:30.100659   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:27.119437   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:27.133330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:27.133426   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:27.170001   64758 cri.go:89] found id: ""
	I0804 00:16:27.170039   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.170048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:27.170054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:27.170112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:27.205811   64758 cri.go:89] found id: ""
	I0804 00:16:27.205843   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.205854   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:27.205861   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:27.205922   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:27.247249   64758 cri.go:89] found id: ""
	I0804 00:16:27.247278   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.247287   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:27.247294   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:27.247360   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:27.285659   64758 cri.go:89] found id: ""
	I0804 00:16:27.285688   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.285697   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:27.285703   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:27.285774   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:27.321039   64758 cri.go:89] found id: ""
	I0804 00:16:27.321066   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.321075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:27.321084   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:27.321130   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:27.359947   64758 cri.go:89] found id: ""
	I0804 00:16:27.359977   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.359988   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:27.359996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:27.360056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:27.401408   64758 cri.go:89] found id: ""
	I0804 00:16:27.401432   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.401440   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:27.401449   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:27.401495   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:27.437297   64758 cri.go:89] found id: ""
	I0804 00:16:27.437326   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.437337   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:27.437347   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:27.437373   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.490594   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:27.490639   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:27.505993   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:27.506021   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:27.588779   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:27.588804   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:27.588820   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:27.681557   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:27.681592   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.225062   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:30.239475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:30.239540   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:30.283896   64758 cri.go:89] found id: ""
	I0804 00:16:30.283923   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.283931   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:30.283938   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:30.284013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:30.321506   64758 cri.go:89] found id: ""
	I0804 00:16:30.321532   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.321539   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:30.321545   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:30.321593   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:30.358314   64758 cri.go:89] found id: ""
	I0804 00:16:30.358340   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.358347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:30.358353   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:30.358400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:30.393561   64758 cri.go:89] found id: ""
	I0804 00:16:30.393587   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.393595   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:30.393600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:30.393646   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:30.429907   64758 cri.go:89] found id: ""
	I0804 00:16:30.429935   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.429943   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:30.429949   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:30.430008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:30.466305   64758 cri.go:89] found id: ""
	I0804 00:16:30.466332   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.466342   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:30.466350   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:30.466408   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:30.505384   64758 cri.go:89] found id: ""
	I0804 00:16:30.505413   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.505424   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:30.505431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:30.505492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:30.541756   64758 cri.go:89] found id: ""
	I0804 00:16:30.541786   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.541796   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:30.541806   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:30.541821   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:30.555516   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:30.555554   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:30.627442   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:30.627463   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:30.627473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:30.701452   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:30.701489   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.743436   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:30.743473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:29.111149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.111470   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:29.327268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.328424   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:32.605884   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:34.608119   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.298898   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:33.315211   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:33.315292   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:33.353171   64758 cri.go:89] found id: ""
	I0804 00:16:33.353207   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.353220   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:33.353229   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:33.353297   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:33.389767   64758 cri.go:89] found id: ""
	I0804 00:16:33.389792   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.389799   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:33.389805   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:33.389851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:33.446889   64758 cri.go:89] found id: ""
	I0804 00:16:33.446928   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.446939   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:33.446946   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:33.447004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:33.487340   64758 cri.go:89] found id: ""
	I0804 00:16:33.487362   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.487370   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:33.487376   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:33.487423   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:33.530398   64758 cri.go:89] found id: ""
	I0804 00:16:33.530421   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.530429   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:33.530435   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:33.530483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:33.568725   64758 cri.go:89] found id: ""
	I0804 00:16:33.568753   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.568762   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:33.568769   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:33.568818   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:33.607205   64758 cri.go:89] found id: ""
	I0804 00:16:33.607232   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.607242   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:33.607249   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:33.607311   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:33.648188   64758 cri.go:89] found id: ""
	I0804 00:16:33.648220   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.648230   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:33.648240   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:33.648256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.700231   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:33.700266   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:33.714899   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:33.714932   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:33.794306   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:33.794326   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:33.794340   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.872446   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:33.872482   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.415000   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:36.428920   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:36.428996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:36.464784   64758 cri.go:89] found id: ""
	I0804 00:16:36.464810   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.464817   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:36.464823   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:36.464925   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:36.501394   64758 cri.go:89] found id: ""
	I0804 00:16:36.501423   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.501431   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:36.501437   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:36.501497   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:36.537049   64758 cri.go:89] found id: ""
	I0804 00:16:36.537079   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.537090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:36.537102   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:36.537173   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:36.573956   64758 cri.go:89] found id: ""
	I0804 00:16:36.573986   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.573997   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:36.574004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:36.574065   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:36.612996   64758 cri.go:89] found id: ""
	I0804 00:16:36.613016   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.613023   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:36.613029   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:36.613083   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:36.652346   64758 cri.go:89] found id: ""
	I0804 00:16:36.652367   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.652374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:36.652380   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:36.652437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:36.690073   64758 cri.go:89] found id: ""
	I0804 00:16:36.690100   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.690110   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:36.690119   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:36.690182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:36.732436   64758 cri.go:89] found id: ""
	I0804 00:16:36.732466   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.732477   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:36.732487   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:36.732505   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:36.746036   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:36.746060   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:36.818141   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:36.818164   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:36.818179   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.611181   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.611691   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.329719   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.330172   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.100705   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.603600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:36.907689   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:36.907732   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.947104   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:36.947135   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.502960   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:39.516340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:39.516414   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:39.555903   64758 cri.go:89] found id: ""
	I0804 00:16:39.555929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.555939   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:39.555946   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:39.556004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:39.599791   64758 cri.go:89] found id: ""
	I0804 00:16:39.599816   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.599827   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:39.599834   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:39.599894   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:39.642903   64758 cri.go:89] found id: ""
	I0804 00:16:39.642929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.642936   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:39.642944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:39.643004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:39.678667   64758 cri.go:89] found id: ""
	I0804 00:16:39.678693   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.678702   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:39.678709   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:39.678757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:39.716888   64758 cri.go:89] found id: ""
	I0804 00:16:39.716916   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.716926   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:39.716933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:39.717001   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:39.751576   64758 cri.go:89] found id: ""
	I0804 00:16:39.751602   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.751610   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:39.751616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:39.751664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:39.794026   64758 cri.go:89] found id: ""
	I0804 00:16:39.794056   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.794067   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:39.794087   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:39.794158   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:39.841426   64758 cri.go:89] found id: ""
	I0804 00:16:39.841454   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.841464   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:39.841474   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:39.841492   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.902579   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:39.902616   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:39.924467   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:39.924495   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:40.001318   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:40.001345   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:40.001377   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:40.081520   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:40.081552   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:38.111443   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:40.610810   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.827851   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.828752   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.327716   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.100037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.100850   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.623094   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:42.636523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:42.636594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:42.674188   64758 cri.go:89] found id: ""
	I0804 00:16:42.674218   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.674226   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:42.674231   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:42.674277   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:42.708496   64758 cri.go:89] found id: ""
	I0804 00:16:42.708522   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.708532   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:42.708539   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:42.708601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:42.751050   64758 cri.go:89] found id: ""
	I0804 00:16:42.751087   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.751100   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:42.751107   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:42.751170   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:42.788520   64758 cri.go:89] found id: ""
	I0804 00:16:42.788546   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.788555   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:42.788560   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:42.788619   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:42.828273   64758 cri.go:89] found id: ""
	I0804 00:16:42.828297   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.828304   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:42.828309   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:42.828356   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:42.867754   64758 cri.go:89] found id: ""
	I0804 00:16:42.867784   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.867799   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:42.867807   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:42.867864   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:42.903945   64758 cri.go:89] found id: ""
	I0804 00:16:42.903977   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.903988   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:42.903996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:42.904059   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:42.942477   64758 cri.go:89] found id: ""
	I0804 00:16:42.942518   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.942539   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:42.942549   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:42.942565   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.981776   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:42.981810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:43.037601   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:43.037634   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:43.052719   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:43.052746   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:43.122664   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:43.122688   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:43.122702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:45.701275   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:45.714532   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:45.714607   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:45.750932   64758 cri.go:89] found id: ""
	I0804 00:16:45.750955   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.750986   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:45.750991   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:45.751042   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:45.787348   64758 cri.go:89] found id: ""
	I0804 00:16:45.787373   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.787381   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:45.787387   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:45.787441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:45.823390   64758 cri.go:89] found id: ""
	I0804 00:16:45.823419   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.823429   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:45.823436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:45.823498   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:45.861400   64758 cri.go:89] found id: ""
	I0804 00:16:45.861430   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.861440   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:45.861448   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:45.861508   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:45.898992   64758 cri.go:89] found id: ""
	I0804 00:16:45.899024   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.899036   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:45.899043   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:45.899110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:45.934542   64758 cri.go:89] found id: ""
	I0804 00:16:45.934570   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.934582   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:45.934589   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:45.934648   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:45.967908   64758 cri.go:89] found id: ""
	I0804 00:16:45.967938   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.967949   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:45.967957   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:45.968018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:46.006475   64758 cri.go:89] found id: ""
	I0804 00:16:46.006504   64758 logs.go:276] 0 containers: []
	W0804 00:16:46.006516   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:46.006526   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:46.006541   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:46.058760   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:46.058793   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:46.074753   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:46.074777   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:46.149634   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:46.149655   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:46.149671   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:46.230104   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:46.230140   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:43.111492   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:45.611224   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.827683   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:47.326999   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:46.600307   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.100532   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:48.772224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:48.785848   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:48.785935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.825206   64758 cri.go:89] found id: ""
	I0804 00:16:48.825232   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.825242   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:48.825249   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:48.825315   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:48.861559   64758 cri.go:89] found id: ""
	I0804 00:16:48.861588   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.861599   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:48.861607   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:48.861675   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:48.903375   64758 cri.go:89] found id: ""
	I0804 00:16:48.903401   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.903412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:48.903419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:48.903480   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:48.940708   64758 cri.go:89] found id: ""
	I0804 00:16:48.940736   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.940748   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:48.940755   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:48.940817   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:48.976190   64758 cri.go:89] found id: ""
	I0804 00:16:48.976218   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.976228   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:48.976236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:48.976291   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:49.010393   64758 cri.go:89] found id: ""
	I0804 00:16:49.010423   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.010434   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:49.010442   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:49.010506   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:49.046670   64758 cri.go:89] found id: ""
	I0804 00:16:49.046698   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.046707   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:49.046711   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:49.046759   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:49.085254   64758 cri.go:89] found id: ""
	I0804 00:16:49.085284   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.085293   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:49.085302   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:49.085314   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:49.142402   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:49.142433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:49.157063   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:49.157092   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:49.233808   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:49.233829   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:49.233841   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:49.320355   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:49.320395   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:51.862548   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:51.875679   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:51.875750   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.110954   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:50.111867   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.327109   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.327920   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.600258   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.601052   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.911400   64758 cri.go:89] found id: ""
	I0804 00:16:51.911427   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.911437   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:51.911444   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:51.911505   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:51.948825   64758 cri.go:89] found id: ""
	I0804 00:16:51.948853   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.948863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:51.948870   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:51.948935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:51.989458   64758 cri.go:89] found id: ""
	I0804 00:16:51.989488   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.989499   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:51.989506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:51.989568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:52.026663   64758 cri.go:89] found id: ""
	I0804 00:16:52.026685   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.026693   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:52.026698   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:52.026754   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:52.066089   64758 cri.go:89] found id: ""
	I0804 00:16:52.066115   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.066127   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:52.066135   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:52.066198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:52.102159   64758 cri.go:89] found id: ""
	I0804 00:16:52.102185   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.102196   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:52.102203   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:52.102258   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:52.144239   64758 cri.go:89] found id: ""
	I0804 00:16:52.144266   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.144276   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:52.144283   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:52.144344   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:52.180679   64758 cri.go:89] found id: ""
	I0804 00:16:52.180708   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.180717   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:52.180725   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:52.180738   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:52.262074   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:52.262116   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.305913   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:52.305948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:52.357044   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:52.357081   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:52.372090   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:52.372119   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:52.444148   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:54.944910   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:54.958182   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:54.958239   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:54.993629   64758 cri.go:89] found id: ""
	I0804 00:16:54.993657   64758 logs.go:276] 0 containers: []
	W0804 00:16:54.993668   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:54.993675   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:54.993734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:55.029270   64758 cri.go:89] found id: ""
	I0804 00:16:55.029299   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.029310   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:55.029317   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:55.029393   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:55.067923   64758 cri.go:89] found id: ""
	I0804 00:16:55.067951   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.067961   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:55.067968   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:55.068027   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:55.107533   64758 cri.go:89] found id: ""
	I0804 00:16:55.107556   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.107565   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:55.107572   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:55.107633   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:55.143828   64758 cri.go:89] found id: ""
	I0804 00:16:55.143856   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.143868   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:55.143875   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:55.143940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:55.177960   64758 cri.go:89] found id: ""
	I0804 00:16:55.178015   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.178030   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:55.178038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:55.178112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:55.217457   64758 cri.go:89] found id: ""
	I0804 00:16:55.217481   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.217488   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:55.217494   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:55.217538   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:55.259862   64758 cri.go:89] found id: ""
	I0804 00:16:55.259890   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.259898   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:55.259907   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:55.259918   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:55.311566   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:55.311598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:55.327833   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:55.327866   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:55.406475   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:55.406495   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:55.406511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:55.484586   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:55.484618   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.610982   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:54.611276   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.611515   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.827394   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:55.827945   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.099238   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.100223   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.599870   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.028251   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:58.042169   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:58.042236   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:58.076836   64758 cri.go:89] found id: ""
	I0804 00:16:58.076859   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.076868   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:58.076873   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:58.076937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:58.115989   64758 cri.go:89] found id: ""
	I0804 00:16:58.116019   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.116031   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:58.116037   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:58.116099   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:58.155049   64758 cri.go:89] found id: ""
	I0804 00:16:58.155079   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.155090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:58.155097   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:58.155160   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:58.190257   64758 cri.go:89] found id: ""
	I0804 00:16:58.190293   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.190305   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:58.190315   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:58.190370   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:58.225001   64758 cri.go:89] found id: ""
	I0804 00:16:58.225029   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.225038   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:58.225061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:58.225118   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:58.268881   64758 cri.go:89] found id: ""
	I0804 00:16:58.268925   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.268937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:58.268945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:58.269010   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:58.305223   64758 cri.go:89] found id: ""
	I0804 00:16:58.305253   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.305269   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:58.305277   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:58.305340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:58.340517   64758 cri.go:89] found id: ""
	I0804 00:16:58.340548   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.340559   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:58.340570   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:58.340584   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:58.355372   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:58.355403   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:58.426292   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:58.426312   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:58.426326   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:58.509990   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:58.510034   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.550957   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:58.550988   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.104806   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:01.119379   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:01.119453   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:01.158376   64758 cri.go:89] found id: ""
	I0804 00:17:01.158407   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.158419   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:01.158426   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:01.158484   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:01.193826   64758 cri.go:89] found id: ""
	I0804 00:17:01.193858   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.193869   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:01.193876   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:01.193937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:01.228566   64758 cri.go:89] found id: ""
	I0804 00:17:01.228588   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.228600   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:01.228607   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:01.228667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:01.265736   64758 cri.go:89] found id: ""
	I0804 00:17:01.265762   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.265772   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:01.265778   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:01.265834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:01.302655   64758 cri.go:89] found id: ""
	I0804 00:17:01.302679   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.302694   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:01.302699   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:01.302753   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:01.340191   64758 cri.go:89] found id: ""
	I0804 00:17:01.340218   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.340226   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:01.340236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:01.340294   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:01.375767   64758 cri.go:89] found id: ""
	I0804 00:17:01.375789   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.375797   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:01.375802   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:01.375875   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:01.412446   64758 cri.go:89] found id: ""
	I0804 00:17:01.412479   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.412490   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:01.412502   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:01.412518   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.466271   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:01.466309   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:01.480800   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:01.480838   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:01.547909   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:01.547932   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:01.547948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:01.628318   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:01.628351   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.611854   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:01.111626   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.326831   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.327154   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.328038   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.601960   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.099489   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.175883   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:04.189038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:04.189098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:04.229126   64758 cri.go:89] found id: ""
	I0804 00:17:04.229158   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.229167   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:04.229174   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:04.229235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:04.264107   64758 cri.go:89] found id: ""
	I0804 00:17:04.264134   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.264142   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:04.264147   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:04.264203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:04.299959   64758 cri.go:89] found id: ""
	I0804 00:17:04.299996   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.300004   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:04.300010   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:04.300056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:04.337978   64758 cri.go:89] found id: ""
	I0804 00:17:04.338006   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.338016   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:04.338023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:04.338081   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:04.377969   64758 cri.go:89] found id: ""
	I0804 00:17:04.377993   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.378001   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:04.378006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:04.378068   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:04.413036   64758 cri.go:89] found id: ""
	I0804 00:17:04.413062   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.413071   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:04.413078   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:04.413140   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:04.450387   64758 cri.go:89] found id: ""
	I0804 00:17:04.450417   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.450426   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:04.450431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:04.450488   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:04.490132   64758 cri.go:89] found id: ""
	I0804 00:17:04.490165   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.490177   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:04.490188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:04.490204   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:04.560633   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:04.560653   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:04.560668   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:04.639409   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:04.639445   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.682479   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:04.682512   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:04.734823   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:04.734857   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:03.112357   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.828050   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.327249   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.099893   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.100093   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.250174   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:07.263523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:07.263599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:07.300095   64758 cri.go:89] found id: ""
	I0804 00:17:07.300124   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.300136   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:07.300144   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:07.300211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:07.337798   64758 cri.go:89] found id: ""
	I0804 00:17:07.337824   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.337846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:07.337851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:07.337902   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:07.375305   64758 cri.go:89] found id: ""
	I0804 00:17:07.375337   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.375348   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:07.375356   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:07.375406   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:07.411603   64758 cri.go:89] found id: ""
	I0804 00:17:07.411629   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.411639   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:07.411646   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:07.411704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:07.450478   64758 cri.go:89] found id: ""
	I0804 00:17:07.450502   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.450511   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:07.450518   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:07.450564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:07.489972   64758 cri.go:89] found id: ""
	I0804 00:17:07.489997   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.490006   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:07.490012   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:07.490073   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:07.523685   64758 cri.go:89] found id: ""
	I0804 00:17:07.523713   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.523725   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:07.523732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:07.523789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:07.562636   64758 cri.go:89] found id: ""
	I0804 00:17:07.562665   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.562675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:07.562686   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:07.562702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:07.647968   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:07.648004   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.689829   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:07.689856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:07.738333   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:07.738366   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.753419   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:07.753448   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:07.829678   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.329981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:10.343676   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:10.343743   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:10.379546   64758 cri.go:89] found id: ""
	I0804 00:17:10.379575   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.379586   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:10.379594   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:10.379657   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:10.416247   64758 cri.go:89] found id: ""
	I0804 00:17:10.416271   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.416279   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:10.416284   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:10.416340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:10.455261   64758 cri.go:89] found id: ""
	I0804 00:17:10.455291   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.455303   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:10.455310   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:10.455373   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:10.493220   64758 cri.go:89] found id: ""
	I0804 00:17:10.493251   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.493262   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:10.493270   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:10.493329   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:10.538682   64758 cri.go:89] found id: ""
	I0804 00:17:10.538709   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.538720   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:10.538727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:10.538787   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:10.575509   64758 cri.go:89] found id: ""
	I0804 00:17:10.575535   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.575546   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:10.575553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:10.575609   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:10.613163   64758 cri.go:89] found id: ""
	I0804 00:17:10.613188   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.613196   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:10.613201   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:10.613260   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:10.648914   64758 cri.go:89] found id: ""
	I0804 00:17:10.648940   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.648947   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:10.648956   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:10.648968   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:10.700151   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:10.700187   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:10.714971   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:10.714998   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:10.787679   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.787698   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:10.787710   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:10.865008   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:10.865048   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.611770   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:10.110299   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.327569   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.327855   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.603427   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.100524   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.406150   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:13.419602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:13.419659   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:13.456823   64758 cri.go:89] found id: ""
	I0804 00:17:13.456852   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.456863   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:13.456870   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:13.456935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:13.493527   64758 cri.go:89] found id: ""
	I0804 00:17:13.493556   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.493567   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:13.493574   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:13.493697   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:13.529745   64758 cri.go:89] found id: ""
	I0804 00:17:13.529770   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.529784   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:13.529790   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:13.529856   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:13.567775   64758 cri.go:89] found id: ""
	I0804 00:17:13.567811   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.567819   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:13.567824   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:13.567888   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:13.604638   64758 cri.go:89] found id: ""
	I0804 00:17:13.604670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.604678   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:13.604685   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:13.604741   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:13.646638   64758 cri.go:89] found id: ""
	I0804 00:17:13.646670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.646679   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:13.646684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:13.646730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:13.694656   64758 cri.go:89] found id: ""
	I0804 00:17:13.694682   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.694693   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:13.694701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:13.694761   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:13.733738   64758 cri.go:89] found id: ""
	I0804 00:17:13.733762   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.733771   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:13.733780   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:13.733792   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:13.749747   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:13.749775   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:13.832826   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:13.832852   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:13.832868   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:13.914198   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:13.914233   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.952753   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:13.952787   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.503600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:16.516932   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:16.517004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:16.552012   64758 cri.go:89] found id: ""
	I0804 00:17:16.552037   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.552046   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:16.552052   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:16.552110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:16.590626   64758 cri.go:89] found id: ""
	I0804 00:17:16.590653   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.590660   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:16.590666   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:16.590732   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:16.628684   64758 cri.go:89] found id: ""
	I0804 00:17:16.628712   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.628723   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:16.628729   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:16.628792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:16.664934   64758 cri.go:89] found id: ""
	I0804 00:17:16.664969   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.664980   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:16.664987   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:16.665054   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:16.700098   64758 cri.go:89] found id: ""
	I0804 00:17:16.700127   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.700138   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:16.700144   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:16.700214   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:16.736761   64758 cri.go:89] found id: ""
	I0804 00:17:16.736786   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.736795   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:16.736800   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:16.736863   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:16.780010   64758 cri.go:89] found id: ""
	I0804 00:17:16.780033   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.780045   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:16.780050   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:16.780106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:16.816079   64758 cri.go:89] found id: ""
	I0804 00:17:16.816103   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.816112   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:16.816122   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:16.816136   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.866526   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:16.866560   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:16.881254   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:16.881287   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:17:12.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.610978   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.611860   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.827860   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.327167   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.601482   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:19.100152   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:17:16.952491   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:16.952515   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:16.952530   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:17.038943   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:17.038977   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:19.580078   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:19.595538   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:19.595601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:19.632206   64758 cri.go:89] found id: ""
	I0804 00:17:19.632234   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.632245   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:19.632252   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:19.632307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:19.670335   64758 cri.go:89] found id: ""
	I0804 00:17:19.670362   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.670377   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:19.670388   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:19.670447   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:19.707772   64758 cri.go:89] found id: ""
	I0804 00:17:19.707801   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.707812   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:19.707818   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:19.707877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:19.743822   64758 cri.go:89] found id: ""
	I0804 00:17:19.743855   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.743867   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:19.743874   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:19.743930   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:19.781592   64758 cri.go:89] found id: ""
	I0804 00:17:19.781622   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.781632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:19.781640   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:19.781698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:19.818792   64758 cri.go:89] found id: ""
	I0804 00:17:19.818815   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.818823   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:19.818829   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:19.818877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:19.856486   64758 cri.go:89] found id: ""
	I0804 00:17:19.856511   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.856522   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:19.856528   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:19.856586   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:19.901721   64758 cri.go:89] found id: ""
	I0804 00:17:19.901743   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.901754   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:19.901764   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:19.901780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:19.980095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:19.980119   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:19.980134   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:20.072699   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:20.072750   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:20.159007   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:20.159038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:20.211785   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:20.211818   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:19.110218   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.110572   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:18.828527   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:20.828554   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.600968   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.602526   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.603220   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:22.727235   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:22.740922   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:22.740996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:22.780356   64758 cri.go:89] found id: ""
	I0804 00:17:22.780381   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.780392   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:22.780400   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:22.780459   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:22.817075   64758 cri.go:89] found id: ""
	I0804 00:17:22.817100   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.817111   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:22.817119   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:22.817182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:22.857213   64758 cri.go:89] found id: ""
	I0804 00:17:22.857243   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.857253   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:22.857260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:22.857325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:22.894049   64758 cri.go:89] found id: ""
	I0804 00:17:22.894085   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.894096   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:22.894104   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:22.894171   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:22.929718   64758 cri.go:89] found id: ""
	I0804 00:17:22.929746   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.929756   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:22.929770   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:22.929843   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:22.964863   64758 cri.go:89] found id: ""
	I0804 00:17:22.964892   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.964901   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:22.964907   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:22.964958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:23.002565   64758 cri.go:89] found id: ""
	I0804 00:17:23.002593   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.002603   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:23.002611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:23.002676   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:23.038161   64758 cri.go:89] found id: ""
	I0804 00:17:23.038188   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.038199   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:23.038211   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:23.038224   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:23.091865   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:23.091903   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:23.108358   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:23.108388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:23.186417   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:23.186438   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:23.186453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.269119   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:23.269161   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:25.812405   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:25.833174   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:25.833253   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:25.881654   64758 cri.go:89] found id: ""
	I0804 00:17:25.881681   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.881690   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:25.881696   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:25.881757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:25.936968   64758 cri.go:89] found id: ""
	I0804 00:17:25.936997   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.937006   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:25.937011   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:25.937066   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:25.972437   64758 cri.go:89] found id: ""
	I0804 00:17:25.972462   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.972470   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:25.972475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:25.972529   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:26.008306   64758 cri.go:89] found id: ""
	I0804 00:17:26.008346   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.008357   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:26.008366   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:26.008435   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:26.045593   64758 cri.go:89] found id: ""
	I0804 00:17:26.045620   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.045632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:26.045639   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:26.045696   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:26.084170   64758 cri.go:89] found id: ""
	I0804 00:17:26.084195   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.084205   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:26.084212   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:26.084272   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:26.122524   64758 cri.go:89] found id: ""
	I0804 00:17:26.122551   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.122559   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:26.122565   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:26.122623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:26.159264   64758 cri.go:89] found id: ""
	I0804 00:17:26.159297   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.159308   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:26.159320   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:26.159337   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:26.205692   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:26.205718   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:26.257286   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:26.257321   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:26.271582   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:26.271611   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:26.344562   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:26.344586   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:26.344598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.112800   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.610507   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.327294   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.828519   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.100160   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.100351   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.929410   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:28.943941   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:28.944003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:28.986127   64758 cri.go:89] found id: ""
	I0804 00:17:28.986157   64758 logs.go:276] 0 containers: []
	W0804 00:17:28.986169   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:28.986176   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:28.986237   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:29.025528   64758 cri.go:89] found id: ""
	I0804 00:17:29.025556   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.025564   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:29.025570   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:29.025624   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:29.059525   64758 cri.go:89] found id: ""
	I0804 00:17:29.059553   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.059561   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:29.059566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:29.059614   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:29.097451   64758 cri.go:89] found id: ""
	I0804 00:17:29.097489   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.097499   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:29.097506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:29.097564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:29.135504   64758 cri.go:89] found id: ""
	I0804 00:17:29.135532   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.135540   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:29.135546   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:29.135601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:29.175277   64758 cri.go:89] found id: ""
	I0804 00:17:29.175314   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.175324   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:29.175332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:29.175391   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:29.210275   64758 cri.go:89] found id: ""
	I0804 00:17:29.210303   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.210314   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:29.210321   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:29.210382   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:29.246138   64758 cri.go:89] found id: ""
	I0804 00:17:29.246174   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.246186   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:29.246196   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:29.246213   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:29.298935   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:29.298971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:29.313342   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:29.313388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:29.384609   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:29.384635   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:29.384650   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:29.461759   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:29.461795   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:27.611021   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:29.612149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:27.831367   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.327878   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.328772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.101073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.600832   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.010152   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:32.023609   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:32.023677   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:32.062480   64758 cri.go:89] found id: ""
	I0804 00:17:32.062508   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.062517   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:32.062523   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:32.062590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:32.099601   64758 cri.go:89] found id: ""
	I0804 00:17:32.099627   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.099634   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:32.099640   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:32.099691   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:32.138651   64758 cri.go:89] found id: ""
	I0804 00:17:32.138680   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.138689   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:32.138694   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:32.138751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:32.182224   64758 cri.go:89] found id: ""
	I0804 00:17:32.182249   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.182257   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:32.182264   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:32.182318   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:32.224381   64758 cri.go:89] found id: ""
	I0804 00:17:32.224410   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.224421   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:32.224429   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:32.224486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:32.261569   64758 cri.go:89] found id: ""
	I0804 00:17:32.261600   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.261609   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:32.261615   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:32.261663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:32.304769   64758 cri.go:89] found id: ""
	I0804 00:17:32.304793   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.304807   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:32.304814   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:32.304867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:32.348695   64758 cri.go:89] found id: ""
	I0804 00:17:32.348727   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.348736   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:32.348745   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:32.348757   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.389444   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:32.389473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:32.442901   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:32.442938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:32.457562   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:32.457588   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:32.529121   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:32.529144   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:32.529160   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.114712   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:35.129725   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:35.129795   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:35.167226   64758 cri.go:89] found id: ""
	I0804 00:17:35.167248   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.167257   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:35.167262   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:35.167310   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:35.200889   64758 cri.go:89] found id: ""
	I0804 00:17:35.200914   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.200922   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:35.200927   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:35.201000   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:35.234899   64758 cri.go:89] found id: ""
	I0804 00:17:35.234927   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.234938   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:35.234945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:35.235003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:35.271355   64758 cri.go:89] found id: ""
	I0804 00:17:35.271393   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.271405   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:35.271412   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:35.271471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:35.313557   64758 cri.go:89] found id: ""
	I0804 00:17:35.313585   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.313595   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:35.313602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:35.313663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:35.352931   64758 cri.go:89] found id: ""
	I0804 00:17:35.352960   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.352971   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:35.352979   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:35.353046   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:35.391202   64758 cri.go:89] found id: ""
	I0804 00:17:35.391232   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.391256   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:35.391263   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:35.391337   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:35.427599   64758 cri.go:89] found id: ""
	I0804 00:17:35.427627   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.427638   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:35.427649   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:35.427666   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:35.482025   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:35.482061   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:35.498274   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:35.498303   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:35.572606   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:35.572631   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:35.572644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.655534   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:35.655566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.114835   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.610785   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.827077   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.827108   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.601588   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.602210   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:40.602295   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.205756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:38.218974   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:38.219044   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:38.253798   64758 cri.go:89] found id: ""
	I0804 00:17:38.253827   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.253839   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:38.253852   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:38.253911   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:38.291074   64758 cri.go:89] found id: ""
	I0804 00:17:38.291102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.291113   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:38.291120   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:38.291182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:38.332097   64758 cri.go:89] found id: ""
	I0804 00:17:38.332123   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.332133   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:38.332140   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:38.332198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:38.370074   64758 cri.go:89] found id: ""
	I0804 00:17:38.370102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.370110   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:38.370117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:38.370176   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:38.406962   64758 cri.go:89] found id: ""
	I0804 00:17:38.406984   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.406993   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:38.406998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:38.407051   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:38.447532   64758 cri.go:89] found id: ""
	I0804 00:17:38.447562   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.447572   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:38.447579   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:38.447653   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:38.484326   64758 cri.go:89] found id: ""
	I0804 00:17:38.484356   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.484368   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:38.484375   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:38.484444   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:38.521831   64758 cri.go:89] found id: ""
	I0804 00:17:38.521858   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.521869   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:38.521880   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:38.521893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.570540   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:38.570569   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:38.624921   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:38.624953   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:38.639451   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:38.639477   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:38.714435   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:38.714459   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:38.714475   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:41.295160   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:41.310032   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:41.310108   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:41.350363   64758 cri.go:89] found id: ""
	I0804 00:17:41.350393   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.350404   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:41.350412   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:41.350475   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:41.391662   64758 cri.go:89] found id: ""
	I0804 00:17:41.391691   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.391698   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:41.391703   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:41.391760   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:41.429653   64758 cri.go:89] found id: ""
	I0804 00:17:41.429678   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.429686   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:41.429692   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:41.429739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:41.469456   64758 cri.go:89] found id: ""
	I0804 00:17:41.469483   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.469494   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:41.469505   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:41.469566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:41.506124   64758 cri.go:89] found id: ""
	I0804 00:17:41.506154   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.506164   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:41.506171   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:41.506234   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:41.543139   64758 cri.go:89] found id: ""
	I0804 00:17:41.543171   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.543182   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:41.543190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:41.543252   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:41.580537   64758 cri.go:89] found id: ""
	I0804 00:17:41.580568   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.580578   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:41.580585   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:41.580652   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:41.619828   64758 cri.go:89] found id: ""
	I0804 00:17:41.619854   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.619862   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:41.619869   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:41.619882   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:41.660749   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:41.660780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:41.712889   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:41.712924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:41.726422   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:41.726447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:41.805673   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:41.805697   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:41.805712   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:37.110193   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.111203   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.327800   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.327910   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.099815   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.101262   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:44.386563   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:44.399891   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:44.399954   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:44.434270   64758 cri.go:89] found id: ""
	I0804 00:17:44.434297   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.434305   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:44.434311   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:44.434372   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:44.469423   64758 cri.go:89] found id: ""
	I0804 00:17:44.469454   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.469463   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:44.469468   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:44.469535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:44.505511   64758 cri.go:89] found id: ""
	I0804 00:17:44.505539   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.505547   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:44.505553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:44.505602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:44.540897   64758 cri.go:89] found id: ""
	I0804 00:17:44.540922   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.540932   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:44.540937   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:44.540996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:44.578722   64758 cri.go:89] found id: ""
	I0804 00:17:44.578747   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.578755   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:44.578760   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:44.578812   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:44.615838   64758 cri.go:89] found id: ""
	I0804 00:17:44.615863   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.615874   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:44.615881   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:44.615940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:44.657695   64758 cri.go:89] found id: ""
	I0804 00:17:44.657724   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.657734   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:44.657741   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:44.657916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:44.695852   64758 cri.go:89] found id: ""
	I0804 00:17:44.695882   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.695892   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:44.695901   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:44.695912   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:44.754643   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:44.754687   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:44.773964   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:44.773994   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:44.857544   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:44.857567   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:44.857583   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.952987   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:44.953027   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:43.610772   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.611480   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.827218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:46.327323   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.600755   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.099574   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.504957   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:47.520153   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:47.520232   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:47.557303   64758 cri.go:89] found id: ""
	I0804 00:17:47.557326   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.557334   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:47.557339   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:47.557410   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:47.595626   64758 cri.go:89] found id: ""
	I0804 00:17:47.595655   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.595665   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:47.595675   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:47.595733   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:47.633430   64758 cri.go:89] found id: ""
	I0804 00:17:47.633458   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.633466   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:47.633472   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:47.633525   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:47.670116   64758 cri.go:89] found id: ""
	I0804 00:17:47.670140   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.670149   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:47.670154   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:47.670200   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:47.709019   64758 cri.go:89] found id: ""
	I0804 00:17:47.709042   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.709050   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:47.709055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:47.709111   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:47.745230   64758 cri.go:89] found id: ""
	I0804 00:17:47.745251   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.745259   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:47.745265   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:47.745319   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:47.787957   64758 cri.go:89] found id: ""
	I0804 00:17:47.787985   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.787996   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:47.788004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:47.788063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:47.821451   64758 cri.go:89] found id: ""
	I0804 00:17:47.821477   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.821488   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:47.821498   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:47.821516   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:47.903035   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:47.903139   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:47.903162   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:47.986659   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:47.986702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:48.037921   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:48.037951   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:48.095354   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:48.095389   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:50.613264   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:50.627717   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:50.627792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:50.669311   64758 cri.go:89] found id: ""
	I0804 00:17:50.669338   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.669347   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:50.669370   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:50.669438   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:50.714674   64758 cri.go:89] found id: ""
	I0804 00:17:50.714704   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.714713   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:50.714718   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:50.714769   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:50.755291   64758 cri.go:89] found id: ""
	I0804 00:17:50.755318   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.755326   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:50.755332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:50.755394   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:50.801927   64758 cri.go:89] found id: ""
	I0804 00:17:50.801955   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.801964   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:50.801970   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:50.802020   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:50.845096   64758 cri.go:89] found id: ""
	I0804 00:17:50.845121   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.845130   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:50.845136   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:50.845193   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:50.882664   64758 cri.go:89] found id: ""
	I0804 00:17:50.882694   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.882705   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:50.882712   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:50.882771   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:50.921233   64758 cri.go:89] found id: ""
	I0804 00:17:50.921260   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.921268   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:50.921273   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:50.921326   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:50.955254   64758 cri.go:89] found id: ""
	I0804 00:17:50.955286   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.955298   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:50.955311   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:50.955329   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:51.010001   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:51.010037   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:51.024943   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:51.024966   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:51.096095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:51.096123   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:51.096139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:51.177829   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:51.177864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:47.611778   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.110408   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:48.328693   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.828022   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.609609   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.100616   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:53.720665   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:53.736318   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:53.736380   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:53.772887   64758 cri.go:89] found id: ""
	I0804 00:17:53.772916   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.772926   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:53.772934   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:53.772995   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:53.811771   64758 cri.go:89] found id: ""
	I0804 00:17:53.811797   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.811837   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:53.811845   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:53.811906   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:53.846684   64758 cri.go:89] found id: ""
	I0804 00:17:53.846716   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.846726   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:53.846736   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:53.846798   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:53.883550   64758 cri.go:89] found id: ""
	I0804 00:17:53.883581   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.883592   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:53.883600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:53.883662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:53.921031   64758 cri.go:89] found id: ""
	I0804 00:17:53.921061   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.921072   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:53.921080   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:53.921153   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:53.960338   64758 cri.go:89] found id: ""
	I0804 00:17:53.960364   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.960374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:53.960381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:53.960441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:53.998404   64758 cri.go:89] found id: ""
	I0804 00:17:53.998434   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.998450   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:53.998458   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:53.998520   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:54.033417   64758 cri.go:89] found id: ""
	I0804 00:17:54.033444   64758 logs.go:276] 0 containers: []
	W0804 00:17:54.033453   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:54.033461   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:54.033473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:54.071945   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:54.071971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:54.124614   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:54.124644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:54.140757   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:54.140783   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:54.241735   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:54.241754   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:54.241769   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:56.821591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:56.836569   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:56.836631   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:56.872013   64758 cri.go:89] found id: ""
	I0804 00:17:56.872039   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.872048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:56.872054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:56.872110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:52.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.111566   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.828335   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:54.830625   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.831382   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:57.101663   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.600253   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.908022   64758 cri.go:89] found id: ""
	I0804 00:17:56.908051   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.908061   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:56.908067   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:56.908114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:56.943309   64758 cri.go:89] found id: ""
	I0804 00:17:56.943336   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.943347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:56.943359   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:56.943415   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:56.977799   64758 cri.go:89] found id: ""
	I0804 00:17:56.977839   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.977847   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:56.977853   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:56.977916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:57.015185   64758 cri.go:89] found id: ""
	I0804 00:17:57.015213   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.015223   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:57.015237   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:57.015295   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:57.051856   64758 cri.go:89] found id: ""
	I0804 00:17:57.051879   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.051887   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:57.051893   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:57.051944   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:57.086349   64758 cri.go:89] found id: ""
	I0804 00:17:57.086376   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.086387   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:57.086393   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:57.086439   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:57.125005   64758 cri.go:89] found id: ""
	I0804 00:17:57.125048   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.125064   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:57.125076   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:57.125090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:57.200348   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:57.200382   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:57.240899   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:57.240924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.294331   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:57.294375   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:57.308388   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:57.308429   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:57.382602   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:59.883070   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:59.897055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:59.897116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:59.932983   64758 cri.go:89] found id: ""
	I0804 00:17:59.933012   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.933021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:59.933029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:59.933088   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:59.971781   64758 cri.go:89] found id: ""
	I0804 00:17:59.971807   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.971815   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:59.971820   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:59.971878   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:00.008381   64758 cri.go:89] found id: ""
	I0804 00:18:00.008406   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.008414   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:00.008419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:00.008483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:00.053257   64758 cri.go:89] found id: ""
	I0804 00:18:00.053281   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.053290   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:00.053295   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:00.053342   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:00.089891   64758 cri.go:89] found id: ""
	I0804 00:18:00.089925   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.089936   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:00.089943   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:00.090008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:00.129833   64758 cri.go:89] found id: ""
	I0804 00:18:00.129863   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.129875   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:00.129884   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:00.129942   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:00.181324   64758 cri.go:89] found id: ""
	I0804 00:18:00.181390   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.181403   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:00.181410   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:00.181471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:00.224426   64758 cri.go:89] found id: ""
	I0804 00:18:00.224451   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.224459   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:00.224467   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:00.224481   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:00.240122   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:00.240155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:00.317324   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:00.317346   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:00.317379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:00.398917   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:00.398952   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:00.440730   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:00.440758   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.111741   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.611509   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.327597   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:01.328678   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.099384   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.100512   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.992128   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:03.006787   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:03.006870   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:03.041291   64758 cri.go:89] found id: ""
	I0804 00:18:03.041321   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.041332   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:03.041341   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:03.041427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:03.077822   64758 cri.go:89] found id: ""
	I0804 00:18:03.077851   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.077863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:03.077871   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:03.077928   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:03.116579   64758 cri.go:89] found id: ""
	I0804 00:18:03.116603   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.116611   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:03.116616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:03.116662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:03.154904   64758 cri.go:89] found id: ""
	I0804 00:18:03.154931   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.154942   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:03.154950   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:03.155018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:03.190300   64758 cri.go:89] found id: ""
	I0804 00:18:03.190328   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.190341   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:03.190349   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:03.190413   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:03.225975   64758 cri.go:89] found id: ""
	I0804 00:18:03.226004   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.226016   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:03.226023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:03.226087   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:03.271499   64758 cri.go:89] found id: ""
	I0804 00:18:03.271525   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.271535   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:03.271543   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:03.271602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:03.308643   64758 cri.go:89] found id: ""
	I0804 00:18:03.308668   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.308675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:03.308684   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:03.308698   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:03.324528   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:03.324562   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:03.401102   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:03.401125   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:03.401139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:03.481817   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:03.481864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:03.522568   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:03.522601   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.074678   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:06.089765   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:06.089844   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:06.128372   64758 cri.go:89] found id: ""
	I0804 00:18:06.128400   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.128411   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:06.128419   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:06.128467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:06.169488   64758 cri.go:89] found id: ""
	I0804 00:18:06.169515   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.169525   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:06.169532   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:06.169590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:06.207969   64758 cri.go:89] found id: ""
	I0804 00:18:06.207998   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.208009   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:06.208015   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:06.208067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:06.244497   64758 cri.go:89] found id: ""
	I0804 00:18:06.244521   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.244529   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:06.244535   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:06.244592   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:06.282905   64758 cri.go:89] found id: ""
	I0804 00:18:06.282935   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.282945   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:06.282952   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:06.283013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:06.322498   64758 cri.go:89] found id: ""
	I0804 00:18:06.322523   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.322530   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:06.322537   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:06.322583   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:06.361377   64758 cri.go:89] found id: ""
	I0804 00:18:06.361402   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.361412   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:06.361420   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:06.361485   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:06.402082   64758 cri.go:89] found id: ""
	I0804 00:18:06.402112   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.402120   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:06.402128   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:06.402141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.452052   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:06.452089   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:06.466695   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:06.466734   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:06.546115   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:06.546140   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:06.546155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:06.639671   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:06.639708   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:02.111360   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.612557   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:03.330392   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:05.828925   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.603713   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.100060   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.193473   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:09.207696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:09.207755   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:09.247757   64758 cri.go:89] found id: ""
	I0804 00:18:09.247784   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.247795   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:09.247802   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:09.247867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:09.285516   64758 cri.go:89] found id: ""
	I0804 00:18:09.285549   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.285559   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:09.285567   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:09.285628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:09.321689   64758 cri.go:89] found id: ""
	I0804 00:18:09.321715   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.321725   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:09.321732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:09.321789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:09.358135   64758 cri.go:89] found id: ""
	I0804 00:18:09.358158   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.358166   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:09.358176   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:09.358223   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:09.393642   64758 cri.go:89] found id: ""
	I0804 00:18:09.393667   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.393675   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:09.393681   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:09.393730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:09.430651   64758 cri.go:89] found id: ""
	I0804 00:18:09.430674   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.430683   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:09.430689   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:09.430734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:09.472433   64758 cri.go:89] found id: ""
	I0804 00:18:09.472460   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.472469   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:09.472474   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:09.472533   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:09.511147   64758 cri.go:89] found id: ""
	I0804 00:18:09.511171   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.511179   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:09.511187   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:09.511207   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:09.560099   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:09.560142   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:09.574609   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:09.574641   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:09.646863   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:09.646891   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:09.646906   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:09.727309   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:09.727352   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:09.111726   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.611445   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:08.329278   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:10.827361   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.600326   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:14.099811   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:12.268925   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:12.284737   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:12.284813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:12.326015   64758 cri.go:89] found id: ""
	I0804 00:18:12.326036   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.326044   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:12.326049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:12.326095   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:12.374096   64758 cri.go:89] found id: ""
	I0804 00:18:12.374129   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.374138   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:12.374143   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:12.374235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:12.426467   64758 cri.go:89] found id: ""
	I0804 00:18:12.426493   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.426502   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:12.426509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:12.426570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:12.485034   64758 cri.go:89] found id: ""
	I0804 00:18:12.485060   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.485072   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:12.485079   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:12.485138   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:12.523490   64758 cri.go:89] found id: ""
	I0804 00:18:12.523517   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.523525   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:12.523530   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:12.523577   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:12.563318   64758 cri.go:89] found id: ""
	I0804 00:18:12.563347   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.563358   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:12.563365   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:12.563425   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:12.600455   64758 cri.go:89] found id: ""
	I0804 00:18:12.600482   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.600492   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:12.600499   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:12.600566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:12.641146   64758 cri.go:89] found id: ""
	I0804 00:18:12.641170   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.641178   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:12.641186   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:12.641197   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:12.697240   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:12.697274   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:12.711399   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:12.711432   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:12.794022   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:12.794050   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:12.794067   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:12.881327   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:12.881379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:15.425765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:15.439338   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:15.439420   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:15.477964   64758 cri.go:89] found id: ""
	I0804 00:18:15.477991   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.478002   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:15.478009   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:15.478069   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:15.514554   64758 cri.go:89] found id: ""
	I0804 00:18:15.514574   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.514583   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:15.514588   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:15.514636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:15.549702   64758 cri.go:89] found id: ""
	I0804 00:18:15.549732   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.549741   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:15.549747   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:15.549813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:15.584619   64758 cri.go:89] found id: ""
	I0804 00:18:15.584663   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.584675   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:15.584683   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:15.584746   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:15.625084   64758 cri.go:89] found id: ""
	I0804 00:18:15.625111   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.625121   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:15.625128   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:15.625192   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:15.666629   64758 cri.go:89] found id: ""
	I0804 00:18:15.666655   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.666664   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:15.666673   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:15.666726   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:15.704287   64758 cri.go:89] found id: ""
	I0804 00:18:15.704316   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.704324   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:15.704330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:15.704383   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:15.740629   64758 cri.go:89] found id: ""
	I0804 00:18:15.740659   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.740668   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:15.740678   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:15.740702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:15.794093   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:15.794124   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:15.807629   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:15.807659   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:15.887638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:15.887665   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:15.887680   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:15.972935   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:15.972978   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:13.611758   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.613472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:13.327640   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.827432   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:16.100599   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.603192   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.518022   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:18.532360   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:18.532433   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:18.565519   64758 cri.go:89] found id: ""
	I0804 00:18:18.565544   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.565552   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:18.565557   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:18.565612   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:18.599939   64758 cri.go:89] found id: ""
	I0804 00:18:18.599967   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.599978   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:18.599985   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:18.600055   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:18.639035   64758 cri.go:89] found id: ""
	I0804 00:18:18.639062   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.639070   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:18.639076   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:18.639124   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:18.677188   64758 cri.go:89] found id: ""
	I0804 00:18:18.677210   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.677218   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:18.677223   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:18.677268   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:18.715892   64758 cri.go:89] found id: ""
	I0804 00:18:18.715921   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.715932   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:18.715940   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:18.716005   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:18.752274   64758 cri.go:89] found id: ""
	I0804 00:18:18.752298   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.752307   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:18.752313   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:18.752368   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:18.795251   64758 cri.go:89] found id: ""
	I0804 00:18:18.795279   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.795288   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:18.795293   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:18.795353   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.830842   64758 cri.go:89] found id: ""
	I0804 00:18:18.830866   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.830874   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:18.830882   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:18.830893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:18.883687   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:18.883719   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:18.898406   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:18.898433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:18.973191   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:18.973215   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:18.973231   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:19.054094   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:19.054141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:21.597245   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:21.612534   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:21.612605   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:21.649391   64758 cri.go:89] found id: ""
	I0804 00:18:21.649415   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.649426   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:21.649434   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:21.649492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:21.683202   64758 cri.go:89] found id: ""
	I0804 00:18:21.683226   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.683233   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:21.683244   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:21.683300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:21.717450   64758 cri.go:89] found id: ""
	I0804 00:18:21.717475   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.717484   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:21.717489   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:21.717547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:21.752559   64758 cri.go:89] found id: ""
	I0804 00:18:21.752588   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.752596   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:21.752602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:21.752650   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:21.788336   64758 cri.go:89] found id: ""
	I0804 00:18:21.788364   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.788375   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:21.788381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:21.788428   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:21.829404   64758 cri.go:89] found id: ""
	I0804 00:18:21.829428   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.829436   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:21.829443   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:21.829502   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:21.869473   64758 cri.go:89] found id: ""
	I0804 00:18:21.869504   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.869515   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:21.869521   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:21.869587   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.111377   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.610253   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:17.827889   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.327830   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.100486   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:23.599788   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.601620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.909883   64758 cri.go:89] found id: ""
	I0804 00:18:21.909907   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.909915   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:21.909923   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:21.909940   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:21.925038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:21.925071   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:22.000261   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.000281   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:22.000294   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:22.082813   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:22.082846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:22.126741   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:22.126774   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:24.677246   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:24.692404   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:24.692467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:24.739001   64758 cri.go:89] found id: ""
	I0804 00:18:24.739039   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.739049   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:24.739054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:24.739119   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:24.779558   64758 cri.go:89] found id: ""
	I0804 00:18:24.779586   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.779597   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:24.779605   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:24.779664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:24.819257   64758 cri.go:89] found id: ""
	I0804 00:18:24.819284   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.819295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:24.819301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:24.819363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:24.862504   64758 cri.go:89] found id: ""
	I0804 00:18:24.862531   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.862539   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:24.862544   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:24.862599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:24.899605   64758 cri.go:89] found id: ""
	I0804 00:18:24.899637   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.899649   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:24.899656   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:24.899716   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:24.936575   64758 cri.go:89] found id: ""
	I0804 00:18:24.936604   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.936612   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:24.936618   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:24.936667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:24.971736   64758 cri.go:89] found id: ""
	I0804 00:18:24.971764   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.971775   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:24.971782   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:24.971851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:25.010214   64758 cri.go:89] found id: ""
	I0804 00:18:25.010244   64758 logs.go:276] 0 containers: []
	W0804 00:18:25.010253   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:25.010265   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:25.010279   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:25.091145   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:25.091186   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:25.137574   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:25.137603   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:25.189559   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:25.189593   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:25.204725   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:25.204763   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:25.278903   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.111666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:22.827542   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:24.829587   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.326922   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:28.100576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:30.603955   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.779500   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:27.793548   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:27.793628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:27.830811   64758 cri.go:89] found id: ""
	I0804 00:18:27.830844   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.830854   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:27.830862   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:27.830919   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:27.869966   64758 cri.go:89] found id: ""
	I0804 00:18:27.869991   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.869998   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:27.870004   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:27.870062   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:27.909474   64758 cri.go:89] found id: ""
	I0804 00:18:27.909496   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.909504   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:27.909509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:27.909567   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:27.948588   64758 cri.go:89] found id: ""
	I0804 00:18:27.948613   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.948625   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:27.948632   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:27.948704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:27.991957   64758 cri.go:89] found id: ""
	I0804 00:18:27.991979   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.991987   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:27.991993   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:27.992052   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:28.029516   64758 cri.go:89] found id: ""
	I0804 00:18:28.029544   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.029555   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:28.029562   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:28.029627   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:28.067851   64758 cri.go:89] found id: ""
	I0804 00:18:28.067879   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.067891   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:28.067898   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:28.067957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:28.107488   64758 cri.go:89] found id: ""
	I0804 00:18:28.107514   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.107524   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:28.107534   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:28.107548   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:28.158490   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:28.158523   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:28.172000   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:28.172030   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:28.247803   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:28.247823   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:28.247839   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:28.326695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:28.326727   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:30.867241   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:30.881074   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:30.881146   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:30.919078   64758 cri.go:89] found id: ""
	I0804 00:18:30.919105   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.919115   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:30.919122   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:30.919184   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:30.954436   64758 cri.go:89] found id: ""
	I0804 00:18:30.954463   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.954474   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:30.954481   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:30.954546   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:30.993080   64758 cri.go:89] found id: ""
	I0804 00:18:30.993110   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.993121   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:30.993129   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:30.993188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:31.031465   64758 cri.go:89] found id: ""
	I0804 00:18:31.031493   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.031504   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:31.031512   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:31.031570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:31.067374   64758 cri.go:89] found id: ""
	I0804 00:18:31.067405   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.067416   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:31.067423   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:31.067493   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:31.104021   64758 cri.go:89] found id: ""
	I0804 00:18:31.104048   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.104059   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:31.104066   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:31.104128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:31.146995   64758 cri.go:89] found id: ""
	I0804 00:18:31.147023   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.147033   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:31.147040   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:31.147106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:31.184708   64758 cri.go:89] found id: ""
	I0804 00:18:31.184739   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.184749   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:31.184760   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:31.184776   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:31.237743   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:31.237781   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:31.252038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:31.252070   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:31.326357   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:31.326380   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:31.326401   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:31.408212   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:31.408256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:27.610666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.610899   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:31.611472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.827980   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:32.326666   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.099814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:35.100740   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
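The interleaved pod_ready lines come from three other profiles (process IDs 65087, 65441 and 64502) polling their metrics-server pods, which never report Ready during this window. A plain-kubectl equivalent of that poll, shown for one pod name from the log (the jsonpath expression is illustrative, not minikube's own code):

    # Check the Ready condition of the metrics-server pod (pod name copied from the log).
    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-5xfgz \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "False" until the pod's readiness probe succeeds.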
	I0804 00:18:33.954396   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:33.968311   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:33.968384   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:34.006574   64758 cri.go:89] found id: ""
	I0804 00:18:34.006605   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.006625   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:34.006635   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:34.006698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:34.042400   64758 cri.go:89] found id: ""
	I0804 00:18:34.042427   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.042435   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:34.042441   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:34.042492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:34.080769   64758 cri.go:89] found id: ""
	I0804 00:18:34.080793   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.080804   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:34.080810   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:34.080877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:34.118283   64758 cri.go:89] found id: ""
	I0804 00:18:34.118311   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.118320   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:34.118326   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:34.118377   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:34.153679   64758 cri.go:89] found id: ""
	I0804 00:18:34.153708   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.153719   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:34.153727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:34.153780   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:34.189618   64758 cri.go:89] found id: ""
	I0804 00:18:34.189674   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.189686   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:34.189696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:34.189770   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:34.224628   64758 cri.go:89] found id: ""
	I0804 00:18:34.224666   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.224677   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:34.224684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:34.224744   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:34.265343   64758 cri.go:89] found id: ""
	I0804 00:18:34.265389   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.265399   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:34.265409   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:34.265428   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:34.337992   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:34.338014   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:34.338025   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:34.420224   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:34.420263   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:34.462009   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:34.462042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:34.520087   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:34.520120   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:34.111351   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.112271   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:34.328807   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.827190   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.599447   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.099291   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.035398   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:37.048955   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:37.049024   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:37.087433   64758 cri.go:89] found id: ""
	I0804 00:18:37.087460   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.087470   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:37.087478   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:37.087542   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:37.128227   64758 cri.go:89] found id: ""
	I0804 00:18:37.128255   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.128267   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:37.128275   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:37.128328   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:37.165371   64758 cri.go:89] found id: ""
	I0804 00:18:37.165405   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.165415   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:37.165424   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:37.165486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:37.201168   64758 cri.go:89] found id: ""
	I0804 00:18:37.201198   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.201209   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:37.201217   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:37.201278   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:37.237378   64758 cri.go:89] found id: ""
	I0804 00:18:37.237406   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.237414   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:37.237419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:37.237465   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:37.273425   64758 cri.go:89] found id: ""
	I0804 00:18:37.273456   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.273467   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:37.273475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:37.273547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:37.313019   64758 cri.go:89] found id: ""
	I0804 00:18:37.313048   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.313056   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:37.313061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:37.313116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:37.354741   64758 cri.go:89] found id: ""
	I0804 00:18:37.354771   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.354779   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:37.354788   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:37.354800   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:37.408703   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:37.408740   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.423393   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:37.423419   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:37.497460   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:37.497487   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:37.497501   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:37.579811   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:37.579856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:40.122872   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:40.139106   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:40.139177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:40.178571   64758 cri.go:89] found id: ""
	I0804 00:18:40.178599   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.178610   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:40.178617   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:40.178679   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:40.215680   64758 cri.go:89] found id: ""
	I0804 00:18:40.215714   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.215722   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:40.215728   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:40.215776   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:40.250618   64758 cri.go:89] found id: ""
	I0804 00:18:40.250647   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.250658   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:40.250666   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:40.250729   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:40.289195   64758 cri.go:89] found id: ""
	I0804 00:18:40.289223   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.289233   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:40.289240   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:40.289296   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:40.330961   64758 cri.go:89] found id: ""
	I0804 00:18:40.330988   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.330998   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:40.331006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:40.331056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:40.376435   64758 cri.go:89] found id: ""
	I0804 00:18:40.376465   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.376478   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:40.376487   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:40.376558   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:40.416415   64758 cri.go:89] found id: ""
	I0804 00:18:40.416447   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.416459   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:40.416465   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:40.416535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:40.452958   64758 cri.go:89] found id: ""
	I0804 00:18:40.452996   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.453007   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:40.453018   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:40.453036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:40.503775   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:40.503808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:40.517825   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:40.517855   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:40.587818   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:40.587847   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:40.587861   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:40.674139   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:40.674183   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:38.611068   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.611923   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:39.326489   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:41.327327   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:42.100795   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:44.602441   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.217266   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:43.232190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:43.232262   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:43.270127   64758 cri.go:89] found id: ""
	I0804 00:18:43.270156   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.270163   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:43.270169   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:43.270219   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:43.309401   64758 cri.go:89] found id: ""
	I0804 00:18:43.309429   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.309439   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:43.309446   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:43.309503   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:43.347210   64758 cri.go:89] found id: ""
	I0804 00:18:43.347235   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.347242   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:43.347247   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:43.347300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:43.382548   64758 cri.go:89] found id: ""
	I0804 00:18:43.382578   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.382588   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:43.382595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:43.382658   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:43.422076   64758 cri.go:89] found id: ""
	I0804 00:18:43.422102   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.422113   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:43.422121   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:43.422168   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:43.458525   64758 cri.go:89] found id: ""
	I0804 00:18:43.458552   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.458560   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:43.458566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:43.458623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:43.498134   64758 cri.go:89] found id: ""
	I0804 00:18:43.498157   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.498165   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:43.498170   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:43.498217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:43.543289   64758 cri.go:89] found id: ""
	I0804 00:18:43.543312   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.543320   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:43.543328   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:43.543338   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:43.593489   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:43.593521   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:43.607838   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:43.607869   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:43.682791   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:43.682813   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:43.682826   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:43.761695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:43.761737   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.305385   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:46.320003   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:46.320063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:46.367941   64758 cri.go:89] found id: ""
	I0804 00:18:46.367969   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.367980   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:46.367986   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:46.368058   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:46.422540   64758 cri.go:89] found id: ""
	I0804 00:18:46.422563   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.422572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:46.422578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:46.422637   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:46.470192   64758 cri.go:89] found id: ""
	I0804 00:18:46.470238   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.470248   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:46.470257   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:46.470316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:46.512375   64758 cri.go:89] found id: ""
	I0804 00:18:46.512399   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.512408   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:46.512413   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:46.512471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:46.546547   64758 cri.go:89] found id: ""
	I0804 00:18:46.546580   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.546592   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:46.546600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:46.546665   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:46.583598   64758 cri.go:89] found id: ""
	I0804 00:18:46.583621   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.583630   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:46.583636   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:46.583692   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:46.621066   64758 cri.go:89] found id: ""
	I0804 00:18:46.621101   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.621116   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:46.621122   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:46.621177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:46.654115   64758 cri.go:89] found id: ""
	I0804 00:18:46.654149   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.654162   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:46.654174   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:46.654191   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:46.738542   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:46.738582   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.778894   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:46.778923   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:46.833225   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:46.833257   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:46.847222   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:46.847247   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:18:42.612522   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.327420   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.327936   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:47.328380   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:46.604576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.100232   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:18:46.922590   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.423639   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:49.437417   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:49.437490   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:49.474889   64758 cri.go:89] found id: ""
	I0804 00:18:49.474914   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.474923   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:49.474929   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:49.474986   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:49.512860   64758 cri.go:89] found id: ""
	I0804 00:18:49.512889   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.512900   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:49.512908   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:49.512965   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:49.550558   64758 cri.go:89] found id: ""
	I0804 00:18:49.550594   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.550603   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:49.550611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:49.550671   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:49.587779   64758 cri.go:89] found id: ""
	I0804 00:18:49.587810   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.587823   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:49.587831   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:49.587890   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:49.630307   64758 cri.go:89] found id: ""
	I0804 00:18:49.630333   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.630344   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:49.630352   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:49.630411   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:49.665013   64758 cri.go:89] found id: ""
	I0804 00:18:49.665046   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.665057   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:49.665064   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:49.665127   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:49.701375   64758 cri.go:89] found id: ""
	I0804 00:18:49.701401   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.701410   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:49.701415   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:49.701472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:49.737237   64758 cri.go:89] found id: ""
	I0804 00:18:49.737261   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.737269   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:49.737278   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:49.737291   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:49.790998   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:49.791033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:49.804933   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:49.804965   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:49.877997   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.878019   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:49.878035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:49.963836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:49.963872   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:47.611774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.612581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.616560   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.827900   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.829950   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.599613   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:53.600496   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:52.506621   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:52.521482   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:52.521553   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:52.555980   64758 cri.go:89] found id: ""
	I0804 00:18:52.556010   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.556021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:52.556029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:52.556094   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:52.593088   64758 cri.go:89] found id: ""
	I0804 00:18:52.593119   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.593130   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:52.593138   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:52.593197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:52.632058   64758 cri.go:89] found id: ""
	I0804 00:18:52.632088   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.632107   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:52.632115   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:52.632177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:52.668701   64758 cri.go:89] found id: ""
	I0804 00:18:52.668730   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.668739   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:52.668745   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:52.668814   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:52.705041   64758 cri.go:89] found id: ""
	I0804 00:18:52.705068   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.705075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:52.705089   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:52.705149   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:52.743304   64758 cri.go:89] found id: ""
	I0804 00:18:52.743327   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.743335   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:52.743340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:52.743397   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:52.781020   64758 cri.go:89] found id: ""
	I0804 00:18:52.781050   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.781060   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:52.781073   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:52.781134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:52.820979   64758 cri.go:89] found id: ""
	I0804 00:18:52.821004   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.821014   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:52.821024   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:52.821042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:52.876450   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:52.876488   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:52.890529   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:52.890566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:52.960682   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:52.960710   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:52.960725   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:53.044000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:53.044040   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:55.601594   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:55.615574   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:55.615645   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:55.655116   64758 cri.go:89] found id: ""
	I0804 00:18:55.655146   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.655157   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:55.655164   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:55.655217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:55.695809   64758 cri.go:89] found id: ""
	I0804 00:18:55.695837   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.695846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:55.695851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:55.695909   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:55.732784   64758 cri.go:89] found id: ""
	I0804 00:18:55.732811   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.732822   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:55.732828   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:55.732920   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:55.773316   64758 cri.go:89] found id: ""
	I0804 00:18:55.773338   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.773347   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:55.773368   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:55.773416   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:55.808886   64758 cri.go:89] found id: ""
	I0804 00:18:55.808913   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.808924   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:55.808931   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:55.808990   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:55.848471   64758 cri.go:89] found id: ""
	I0804 00:18:55.848499   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.848507   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:55.848513   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:55.848568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:55.884088   64758 cri.go:89] found id: ""
	I0804 00:18:55.884117   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.884128   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:55.884134   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:55.884194   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:55.918194   64758 cri.go:89] found id: ""
	I0804 00:18:55.918222   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.918233   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:55.918243   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:55.918264   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:55.932685   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:55.932717   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:56.003817   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:56.003840   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:56.003856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:56.087804   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:56.087846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:56.129959   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:56.129993   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:54.111584   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.610664   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:54.327283   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.328332   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.100620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.601669   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.604763   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.685077   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:58.698624   64758 kubeadm.go:597] duration metric: took 4m4.179874556s to restartPrimaryControlPlane
	W0804 00:18:58.698704   64758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:18:58.698731   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:18:58.611004   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.611252   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.828188   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:01.329218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.100214   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.101275   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.967117   64758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.268366381s)
	I0804 00:19:03.967202   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:19:03.982098   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:19:03.991994   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:19:04.002780   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:19:04.002802   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:19:04.002845   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:19:04.012216   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:19:04.012279   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:19:04.021463   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:19:04.030689   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:19:04.030743   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:19:04.040801   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.050496   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:19:04.050558   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.060782   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:19:04.071595   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:19:04.071673   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:19:04.082123   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:19:04.313172   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
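The sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint, removed if it does not contain it, and then kubeadm init is re-run against the generated config. A minimal shell sketch of the same steps, using the endpoint, paths, and flags shown in the log (the preflight-error list is abbreviated here):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep a kubeconfig only if it already points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
    # re-initialize the control plane from the generated config, ignoring the
    # preflight checks listed in the log line above
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem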
	I0804 00:19:02.611712   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.111575   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.827427   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:06.327317   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.599775   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:09.599814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.611608   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.110194   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:08.333681   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.829626   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:11.601081   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.099098   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:12.110388   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.111401   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:13.327035   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:15.327695   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:17.327749   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.100543   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.602723   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:20.603470   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.611336   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.111798   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:19.329120   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.826869   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:22.605600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.101500   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:23.610581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.610814   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:24.326982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:26.827772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:27.599557   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.600283   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:28.110748   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:30.111027   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.327031   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:31.328581   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.101571   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.601251   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.610784   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.612611   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:33.828237   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:35.828319   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.099717   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.100492   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.111009   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.610805   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:38.326730   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:40.327548   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.330066   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:41.600239   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:43.600686   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.601464   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.110900   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:44.610221   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.605124   65087 pod_ready.go:81] duration metric: took 4m0.000843677s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:45.605152   65087 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0804 00:19:45.605175   65087 pod_ready.go:38] duration metric: took 4m13.615224515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:45.605208   65087 kubeadm.go:597] duration metric: took 4m21.736484018s to restartPrimaryControlPlane
	W0804 00:19:45.605273   65087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:19:45.605304   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
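The pod_ready messages above come from polling the pod's Ready condition until the 4m0s deadline expires, after which the control plane is reset. The same condition can be read directly with kubectl; a sketch against the pod named in the log (standard jsonpath over the pod's status conditions):

    # prints "True" once the metrics-server pod reports Ready
    kubectl -n kube-system get pod metrics-server-6867b74b74-5xfgz \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'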
	I0804 00:19:44.827547   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:47.329541   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:48.101237   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:50.603754   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:49.826561   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:51.828643   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.100714   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:55.102037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.832996   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:54.830906   65441 pod_ready.go:81] duration metric: took 4m0.010324747s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:54.830936   65441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:19:54.830947   65441 pod_ready.go:38] duration metric: took 4m4.842701336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:54.830968   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:19:54.831003   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:54.831070   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:54.887772   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:54.887804   65441 cri.go:89] found id: ""
	I0804 00:19:54.887815   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:54.887877   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.892740   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:54.892801   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:54.943044   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:54.943082   65441 cri.go:89] found id: ""
	I0804 00:19:54.943092   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:54.943164   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.947699   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:54.947765   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:54.997280   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:54.997302   65441 cri.go:89] found id: ""
	I0804 00:19:54.997311   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:54.997380   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.005574   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:55.005642   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:55.066824   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:55.066845   65441 cri.go:89] found id: ""
	I0804 00:19:55.066852   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:55.066906   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.071713   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:55.071779   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:55.116381   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.116406   65441 cri.go:89] found id: ""
	I0804 00:19:55.116414   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:55.116468   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.121174   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:55.121237   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:55.168300   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:55.168323   65441 cri.go:89] found id: ""
	I0804 00:19:55.168331   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:55.168381   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.173450   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:55.173509   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:55.218999   65441 cri.go:89] found id: ""
	I0804 00:19:55.219030   65441 logs.go:276] 0 containers: []
	W0804 00:19:55.219041   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:55.219048   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:55.219115   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:55.263696   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:55.263723   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.263727   65441 cri.go:89] found id: ""
	I0804 00:19:55.263734   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:55.263789   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.269001   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.277864   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:19:55.277899   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.323692   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:55.323729   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.364971   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:55.365005   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:55.871942   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:19:55.871983   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:19:55.929828   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:55.929869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:55.987389   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:55.987425   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:56.041330   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:56.041381   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:56.082524   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:56.082556   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:56.122545   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:19:56.122572   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:56.178249   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:19:56.178288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:56.219273   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:19:56.219300   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:19:56.235345   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:19:56.235389   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:19:56.370660   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:56.370692   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
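Each "Gathering logs for ..." step above is a two-command pattern: discover the container ID with crictl, then tail its log. A minimal sketch of the same pattern for any of the components listed, using the flags exactly as they appear in the log:

    # find the container ID for a component, e.g. kube-apiserver
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    # tail the last 400 lines of that container's log, as the harness does
    sudo crictl logs --tail 400 "$ID"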
	I0804 00:19:57.600248   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:00.100920   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:58.936934   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:19:58.953624   65441 api_server.go:72] duration metric: took 4m14.22488371s to wait for apiserver process to appear ...
	I0804 00:19:58.953655   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:19:58.953700   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:58.953764   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:58.997408   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:58.997434   65441 cri.go:89] found id: ""
	I0804 00:19:58.997443   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:58.997492   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.004398   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:59.004466   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:59.041483   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.041510   65441 cri.go:89] found id: ""
	I0804 00:19:59.041518   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:59.041568   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.045754   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:59.045825   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:59.081738   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.081756   65441 cri.go:89] found id: ""
	I0804 00:19:59.081764   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:59.081809   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.086297   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:59.086348   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:59.124421   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:59.124440   65441 cri.go:89] found id: ""
	I0804 00:19:59.124447   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:59.124494   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.128612   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:59.128677   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:59.165702   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:59.165728   65441 cri.go:89] found id: ""
	I0804 00:19:59.165737   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:59.165791   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.170016   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:59.170103   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:59.205275   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:59.205299   65441 cri.go:89] found id: ""
	I0804 00:19:59.205307   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:59.205377   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.209637   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:59.209699   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:59.244254   65441 cri.go:89] found id: ""
	I0804 00:19:59.244281   65441 logs.go:276] 0 containers: []
	W0804 00:19:59.244290   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:59.244295   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:59.244343   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:59.281850   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:59.281876   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.281880   65441 cri.go:89] found id: ""
	I0804 00:19:59.281887   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:59.281935   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.286423   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.291108   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:59.291134   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.340778   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:59.340808   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.379258   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:59.379288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.418902   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:59.418932   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:59.875668   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:59.875708   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:59.932947   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:59.932980   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:59.980190   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:59.980224   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:00.024331   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:00.024359   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:00.064676   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:00.064701   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:00.117684   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:00.117717   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:00.153654   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:00.153683   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:00.200840   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:00.200869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:00.214380   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:00.214410   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:02.101240   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:04.600064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:02.832546   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:20:02.837684   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:20:02.838736   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:02.838763   65441 api_server.go:131] duration metric: took 3.885096913s to wait for apiserver health ...
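The healthz wait above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. Roughly the same check by hand (a sketch; the address and port are the ones from the log, and -k skips certificate verification for brevity):

    curl -k https://192.168.39.132:8444/healthz
    # expected output when the apiserver is healthy:
    # ok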
	I0804 00:20:02.838773   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:02.838798   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:02.838856   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:02.878530   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:02.878556   65441 cri.go:89] found id: ""
	I0804 00:20:02.878565   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:20:02.878628   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.883263   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:02.883338   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:02.921989   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:02.922009   65441 cri.go:89] found id: ""
	I0804 00:20:02.922017   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:20:02.922062   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.928690   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:02.928767   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:02.967469   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:02.967490   65441 cri.go:89] found id: ""
	I0804 00:20:02.967498   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:20:02.967544   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.972155   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:02.972217   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:03.011875   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:03.011900   65441 cri.go:89] found id: ""
	I0804 00:20:03.011910   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:20:03.011966   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.016326   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:03.016395   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:03.057114   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:03.057137   65441 cri.go:89] found id: ""
	I0804 00:20:03.057145   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:20:03.057206   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.061528   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:03.061592   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:03.101778   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:03.101832   65441 cri.go:89] found id: ""
	I0804 00:20:03.101842   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:20:03.101902   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.106292   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:03.106368   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:03.146453   65441 cri.go:89] found id: ""
	I0804 00:20:03.146484   65441 logs.go:276] 0 containers: []
	W0804 00:20:03.146496   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:03.146504   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:03.146569   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:03.185861   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.185884   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.185887   65441 cri.go:89] found id: ""
	I0804 00:20:03.185894   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:20:03.185941   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.190490   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.194727   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:03.194750   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:03.308015   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:20:03.308052   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:03.358699   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:20:03.358732   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:03.410398   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:20:03.410430   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.450651   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:03.450685   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:03.859092   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:03.859145   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.905500   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:03.905529   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:03.951014   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:03.951047   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:04.003275   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:04.003311   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:04.017574   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:20:04.017608   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:04.054252   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:20:04.054283   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:04.094524   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:04.094558   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:04.131163   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:04.131192   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:06.691154   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:06.691193   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.691199   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.691203   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.691209   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.691213   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.691218   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.691226   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.691232   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.691244   65441 system_pods.go:74] duration metric: took 3.852463199s to wait for pod list to return data ...
	I0804 00:20:06.691257   65441 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:06.693724   65441 default_sa.go:45] found service account: "default"
	I0804 00:20:06.693755   65441 default_sa.go:55] duration metric: took 2.486182ms for default service account to be created ...
	I0804 00:20:06.693767   65441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:06.698925   65441 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:06.698950   65441 system_pods.go:89] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.698956   65441 system_pods.go:89] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.698962   65441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.698968   65441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.698972   65441 system_pods.go:89] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.698976   65441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.698983   65441 system_pods.go:89] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.698990   65441 system_pods.go:89] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.698997   65441 system_pods.go:126] duration metric: took 5.224971ms to wait for k8s-apps to be running ...
	I0804 00:20:06.699003   65441 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:06.699047   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:06.714188   65441 system_svc.go:56] duration metric: took 15.17801ms WaitForService to wait for kubelet
	I0804 00:20:06.714213   65441 kubeadm.go:582] duration metric: took 4m21.985480612s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:06.714232   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:06.716717   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:06.716743   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:06.716757   65441 node_conditions.go:105] duration metric: took 2.521245ms to run NodePressure ...
	I0804 00:20:06.716771   65441 start.go:241] waiting for startup goroutines ...
	I0804 00:20:06.716780   65441 start.go:246] waiting for cluster config update ...
	I0804 00:20:06.716796   65441 start.go:255] writing updated cluster config ...
	I0804 00:20:06.717156   65441 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:06.765983   65441 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:06.768482   65441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-969068" cluster and "default" namespace by default
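With the cluster marked Done, the kubeconfig context default-k8s-diff-port-969068 is active, so the checks the log just ran (kube-system pods, default service account, kubelet service) can be repeated by hand. A sketch using standard kubectl and systemctl invocations:

    kubectl --context default-k8s-diff-port-969068 -n kube-system get pods
    kubectl --context default-k8s-diff-port-969068 -n default get serviceaccount default
    # on the node itself (the log runs its systemctl check over SSH):
    sudo systemctl is-active kubelet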
	I0804 00:20:06.600233   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:08.603829   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:11.852948   65087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.247618249s)
	I0804 00:20:11.853025   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:11.870882   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:20:11.882005   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:20:11.892505   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:20:11.892527   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:20:11.892570   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:20:11.902005   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:20:11.902061   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:20:11.911585   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:20:11.921837   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:20:11.921911   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:20:11.101091   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:13.607073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:14.600605   64502 pod_ready.go:81] duration metric: took 4m0.007136508s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:20:14.600629   64502 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:20:14.600637   64502 pod_ready.go:38] duration metric: took 4m5.120472791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:14.600651   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:14.600675   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:14.600717   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:14.669699   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:14.669724   64502 cri.go:89] found id: ""
	I0804 00:20:14.669733   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:14.669789   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.674907   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:14.674978   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:14.720830   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:14.720867   64502 cri.go:89] found id: ""
	I0804 00:20:14.720877   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:14.720937   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.726667   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:14.726729   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:14.778216   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:14.778247   64502 cri.go:89] found id: ""
	I0804 00:20:14.778256   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:14.778321   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.785349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:14.785433   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:14.836381   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:14.836408   64502 cri.go:89] found id: ""
	I0804 00:20:14.836416   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:14.836475   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.841662   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:14.841752   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:14.884803   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:14.884827   64502 cri.go:89] found id: ""
	I0804 00:20:14.884836   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:14.884904   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.890625   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:14.890696   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:14.942713   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:14.942732   64502 cri.go:89] found id: ""
	I0804 00:20:14.942739   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:14.942800   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.948335   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:14.948391   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:14.994869   64502 cri.go:89] found id: ""
	I0804 00:20:14.994900   64502 logs.go:276] 0 containers: []
	W0804 00:20:14.994910   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:14.994917   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:14.994985   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:15.034528   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.034557   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.034564   64502 cri.go:89] found id: ""
	I0804 00:20:15.034574   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:15.034633   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.039335   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.044600   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:15.044625   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.091365   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:15.091398   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.144896   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:15.144924   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:15.675849   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:15.675901   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:15.691640   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:15.691699   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:11.931844   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.941369   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:20:11.941430   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.951279   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:20:11.961201   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:20:11.961275   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:20:11.972150   65087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:20:12.024567   65087 kubeadm.go:310] W0804 00:20:12.001791    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.025287   65087 kubeadm.go:310] W0804 00:20:12.002530    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.154034   65087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
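The two deprecation warnings above report that the kubeadm config still uses the kubeadm.k8s.io/v1beta3 API. kubeadm's suggested remedy, quoted in the warning itself, is to migrate the file before running init (file names are the placeholders from the warning text):

    # rewrite a v1beta3 ClusterConfiguration/InitConfiguration to the newer API version
    kubeadm config migrate --old-config old.yaml --new-config new.yaml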
	I0804 00:20:20.358593   65087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0804 00:20:20.358649   65087 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:20:20.358721   65087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:20:20.358834   65087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:20:20.358953   65087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 00:20:20.359013   65087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:20:20.360498   65087 out.go:204]   - Generating certificates and keys ...
	I0804 00:20:20.360590   65087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:20:20.360692   65087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:20:20.360767   65087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:20:20.360821   65087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:20:20.360915   65087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:20:20.360971   65087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:20:20.361042   65087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:20:20.361124   65087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:20:20.361228   65087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:20:20.361307   65087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:20:20.361342   65087 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:20:20.361436   65087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:20:20.361523   65087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:20:20.361592   65087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:20:20.361642   65087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:20:20.361698   65087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:20:20.361746   65087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:20:20.361815   65087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:20:20.361881   65087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:20:20.363214   65087 out.go:204]   - Booting up control plane ...
	I0804 00:20:20.363312   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:20:20.363381   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:20:20.363450   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:20:20.363541   65087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:20:20.363628   65087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:20:20.363678   65087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:20:20.363790   65087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:20:20.363889   65087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 00:20:20.363960   65087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.009132208s
	I0804 00:20:20.364044   65087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:20:20.364094   65087 kubeadm.go:310] [api-check] The API server is healthy after 4.501833932s
	I0804 00:20:20.364201   65087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:20:20.364321   65087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:20:20.364397   65087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:20:20.364585   65087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-118016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:20:20.364634   65087 kubeadm.go:310] [bootstrap-token] Using token: bbnfwa.jorg7huedw5cbtk2
	I0804 00:20:20.366569   65087 out.go:204]   - Configuring RBAC rules ...
	I0804 00:20:20.366705   65087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:20:20.366823   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:20:20.366979   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:20:20.367160   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:20:20.367275   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:20:20.367352   65087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:20:20.367447   65087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:20:20.367510   65087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:20:20.367574   65087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:20:20.367580   65087 kubeadm.go:310] 
	I0804 00:20:20.367629   65087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:20:20.367635   65087 kubeadm.go:310] 
	I0804 00:20:20.367697   65087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:20:20.367703   65087 kubeadm.go:310] 
	I0804 00:20:20.367724   65087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:20:20.367784   65087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:20:20.367828   65087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:20:20.367834   65087 kubeadm.go:310] 
	I0804 00:20:20.367886   65087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:20:20.367903   65087 kubeadm.go:310] 
	I0804 00:20:20.367971   65087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:20:20.367981   65087 kubeadm.go:310] 
	I0804 00:20:20.368050   65087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:20:20.368125   65087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:20:20.368187   65087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:20:20.368193   65087 kubeadm.go:310] 
	I0804 00:20:20.368262   65087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:20:20.368353   65087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:20:20.368367   65087 kubeadm.go:310] 
	I0804 00:20:20.368480   65087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368588   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:20:20.368614   65087 kubeadm.go:310] 	--control-plane 
	I0804 00:20:20.368621   65087 kubeadm.go:310] 
	I0804 00:20:20.368705   65087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:20:20.368712   65087 kubeadm.go:310] 
	I0804 00:20:20.368810   65087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368933   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:20:20.368947   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:20:20.368957   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:20:20.370303   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:20:15.859131   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:15.859169   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:15.917686   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:15.917726   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:15.964285   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:15.964328   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:16.019646   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:16.019679   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:16.069379   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:16.069416   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:16.129752   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:16.129842   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:16.177015   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:16.177043   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:16.220526   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:16.220560   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:18.771509   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:18.793252   64502 api_server.go:72] duration metric: took 4m15.042389156s to wait for apiserver process to appear ...
	I0804 00:20:18.793291   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:18.793334   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:18.793415   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:18.839339   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:18.839363   64502 cri.go:89] found id: ""
	I0804 00:20:18.839372   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:18.839432   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.843932   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:18.844005   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:18.894398   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:18.894422   64502 cri.go:89] found id: ""
	I0804 00:20:18.894432   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:18.894491   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.899596   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:18.899664   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:18.947077   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:18.947106   64502 cri.go:89] found id: ""
	I0804 00:20:18.947114   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:18.947168   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.952349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:18.952431   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:18.999336   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:18.999361   64502 cri.go:89] found id: ""
	I0804 00:20:18.999377   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:18.999441   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.005419   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:19.005502   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:19.061388   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.061413   64502 cri.go:89] found id: ""
	I0804 00:20:19.061422   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:19.061476   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.066071   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:19.066139   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:19.111849   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.111872   64502 cri.go:89] found id: ""
	I0804 00:20:19.111879   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:19.111929   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.116272   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:19.116323   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:19.157387   64502 cri.go:89] found id: ""
	I0804 00:20:19.157414   64502 logs.go:276] 0 containers: []
	W0804 00:20:19.157423   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:19.157431   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:19.157493   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:19.199627   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.199654   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.199660   64502 cri.go:89] found id: ""
	I0804 00:20:19.199669   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:19.199727   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.204317   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.208707   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:19.208729   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:19.261684   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:19.261717   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:19.309861   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:19.309890   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:19.349376   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:19.349403   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.388561   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:19.388590   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.466119   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:19.466163   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.515539   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:19.515575   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.561529   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:19.561556   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:19.626188   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:19.626219   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:19.640348   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:19.640372   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:19.772397   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:19.772439   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:19.827217   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:19.827260   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:20.306543   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:20.306589   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:20.371388   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:20:20.384738   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:20:20.404547   65087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:20:20.404607   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:20.404659   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-118016 minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=no-preload-118016 minikube.k8s.io/primary=true
	I0804 00:20:20.602476   65087 ops.go:34] apiserver oom_adj: -16
	I0804 00:20:20.602551   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.103011   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.602888   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.102779   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.603282   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.103337   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.603522   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.103510   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.603474   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.689895   65087 kubeadm.go:1113] duration metric: took 4.285337247s to wait for elevateKubeSystemPrivileges
	I0804 00:20:24.689931   65087 kubeadm.go:394] duration metric: took 5m0.881315877s to StartCluster
	I0804 00:20:24.689947   65087 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.690018   65087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:20:24.691559   65087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.691784   65087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:20:24.691848   65087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:20:24.691963   65087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-118016"
	I0804 00:20:24.691977   65087 addons.go:69] Setting default-storageclass=true in profile "no-preload-118016"
	I0804 00:20:24.691999   65087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-118016"
	I0804 00:20:24.692001   65087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118016"
	I0804 00:20:24.692001   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:20:24.692018   65087 addons.go:69] Setting metrics-server=true in profile "no-preload-118016"
	W0804 00:20:24.692007   65087 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:20:24.692068   65087 addons.go:234] Setting addon metrics-server=true in "no-preload-118016"
	I0804 00:20:24.692086   65087 host.go:66] Checking if "no-preload-118016" exists ...
	W0804 00:20:24.692099   65087 addons.go:243] addon metrics-server should already be in state true
	I0804 00:20:24.692142   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.692440   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692464   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692494   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692441   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692517   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692566   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.693590   65087 out.go:177] * Verifying Kubernetes components...
	I0804 00:20:24.695139   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:20:24.708841   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0804 00:20:24.709324   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.709911   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.709937   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.710305   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.710594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.712827   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0804 00:20:24.712894   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0804 00:20:24.713349   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713884   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713899   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.713923   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713942   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.714211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714264   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714421   65087 addons.go:234] Setting addon default-storageclass=true in "no-preload-118016"
	W0804 00:20:24.714440   65087 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:20:24.714468   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.714605   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714623   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714801   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714846   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714993   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.715014   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.730476   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0804 00:20:24.730811   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0804 00:20:24.730912   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731145   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731470   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731486   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731733   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731748   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731808   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732034   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.732123   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732294   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.733677   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0804 00:20:24.734185   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.734257   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734306   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734689   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.734709   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.735090   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.735618   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.735643   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.736977   65087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:20:24.736979   65087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:20:22.853589   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:20:22.859439   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:20:22.860503   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:22.860521   64502 api_server.go:131] duration metric: took 4.067223392s to wait for apiserver health ...
	I0804 00:20:22.860528   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:22.860550   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:22.860598   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:22.901174   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:22.901193   64502 cri.go:89] found id: ""
	I0804 00:20:22.901200   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:22.901246   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.905319   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:22.905404   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:22.948354   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:22.948378   64502 cri.go:89] found id: ""
	I0804 00:20:22.948387   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:22.948438   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.952776   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:22.952863   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:22.989339   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:22.989376   64502 cri.go:89] found id: ""
	I0804 00:20:22.989385   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:22.989443   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.993833   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:22.993909   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:23.035367   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.035385   64502 cri.go:89] found id: ""
	I0804 00:20:23.035392   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:23.035434   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.040184   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:23.040259   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:23.078508   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.078529   64502 cri.go:89] found id: ""
	I0804 00:20:23.078538   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:23.078601   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.082907   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:23.082969   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:23.120846   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.120870   64502 cri.go:89] found id: ""
	I0804 00:20:23.120880   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:23.120943   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.125641   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:23.125702   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:23.172188   64502 cri.go:89] found id: ""
	I0804 00:20:23.172212   64502 logs.go:276] 0 containers: []
	W0804 00:20:23.172224   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:23.172232   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:23.172297   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:23.218188   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.218207   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.218211   64502 cri.go:89] found id: ""
	I0804 00:20:23.218217   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:23.218268   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.222562   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.226965   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:23.226989   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:23.269384   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:23.269414   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.309148   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:23.309178   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.356908   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:23.356936   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.395352   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:23.395381   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:23.450901   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:23.450925   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.488908   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:23.488945   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.551780   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:23.551808   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:23.975030   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:23.975070   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:24.035464   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:24.035497   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:24.053118   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:24.053148   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:24.197157   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:24.197189   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:24.254049   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:24.254083   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:24.738735   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:20:24.738757   65087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:20:24.738785   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.738836   65087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:24.738847   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:20:24.738860   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.742131   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742539   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.742569   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742690   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.742968   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743009   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.743254   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.743142   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743174   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.743387   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.743447   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743590   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743720   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.754983   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0804 00:20:24.755436   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.755877   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.755901   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.756229   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.756454   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.758285   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.758520   65087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:24.758537   65087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:20:24.758555   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.761268   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.761715   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.761739   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.762001   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.762211   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.762402   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.762593   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.942323   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:20:24.971293   65087 node_ready.go:35] waiting up to 6m0s for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991406   65087 node_ready.go:49] node "no-preload-118016" has status "Ready":"True"
	I0804 00:20:24.991428   65087 node_ready.go:38] duration metric: took 20.101499ms for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991436   65087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:25.004484   65087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:25.069407   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:20:25.069437   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:20:25.093645   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:25.178590   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:20:25.178615   65087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:20:25.246634   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:25.272880   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.272916   65087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:20:25.368517   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.442382   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442406   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.442668   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.442711   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.442717   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.442726   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442732   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.444425   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.444456   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.444463   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.451275   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.451298   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.451605   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.451620   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.451617   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218075   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218105   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218391   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218416   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.218427   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218435   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218440   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218702   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218764   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218786   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.671629   65087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.303057537s)
	I0804 00:20:26.671683   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.671702   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672010   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672031   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672041   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.672049   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672327   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672363   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672378   65087 addons.go:475] Verifying addon metrics-server=true in "no-preload-118016"
	I0804 00:20:26.674374   65087 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:20:26.803868   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:26.803909   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.803917   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.803923   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.803928   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.803934   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.803940   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.803948   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.803957   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.803966   64502 system_pods.go:74] duration metric: took 3.943432992s to wait for pod list to return data ...
	I0804 00:20:26.803978   64502 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:26.808760   64502 default_sa.go:45] found service account: "default"
	I0804 00:20:26.808786   64502 default_sa.go:55] duration metric: took 4.797226ms for default service account to be created ...
	I0804 00:20:26.808796   64502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:26.814721   64502 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:26.814753   64502 system_pods.go:89] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.814761   64502 system_pods.go:89] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.814768   64502 system_pods.go:89] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.814774   64502 system_pods.go:89] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.814780   64502 system_pods.go:89] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.814787   64502 system_pods.go:89] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.814798   64502 system_pods.go:89] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.814807   64502 system_pods.go:89] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.814819   64502 system_pods.go:126] duration metric: took 6.01558ms to wait for k8s-apps to be running ...
	I0804 00:20:26.814828   64502 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:26.814894   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:26.837462   64502 system_svc.go:56] duration metric: took 22.624089ms WaitForService to wait for kubelet
	I0804 00:20:26.837494   64502 kubeadm.go:582] duration metric: took 4m23.086636256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:26.837522   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:26.841517   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:26.841548   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:26.841563   64502 node_conditions.go:105] duration metric: took 4.034693ms to run NodePressure ...
	I0804 00:20:26.841576   64502 start.go:241] waiting for startup goroutines ...
	I0804 00:20:26.841586   64502 start.go:246] waiting for cluster config update ...
	I0804 00:20:26.841600   64502 start.go:255] writing updated cluster config ...
	I0804 00:20:26.841939   64502 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:26.908142   64502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:26.910191   64502 out.go:177] * Done! kubectl is now configured to use "embed-certs-877598" cluster and "default" namespace by default
	I0804 00:20:26.675679   65087 addons.go:510] duration metric: took 1.98382947s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:20:27.012226   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:29.511886   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:32.011000   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:32.011021   65087 pod_ready.go:81] duration metric: took 7.006508451s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:32.011031   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518235   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.518260   65087 pod_ready.go:81] duration metric: took 1.507219524s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518270   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522894   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.522916   65087 pod_ready.go:81] duration metric: took 4.639763ms for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522928   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527271   65087 pod_ready.go:92] pod "kube-proxy-4jqng" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.527291   65087 pod_ready.go:81] duration metric: took 4.353851ms for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527303   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531405   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.531424   65087 pod_ready.go:81] duration metric: took 4.113418ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531433   65087 pod_ready.go:38] duration metric: took 8.539987559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:33.531449   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:33.531505   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:33.546783   65087 api_server.go:72] duration metric: took 8.854972636s to wait for apiserver process to appear ...
	I0804 00:20:33.546813   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:33.546832   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:20:33.551131   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:20:33.552092   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:20:33.552112   65087 api_server.go:131] duration metric: took 5.292367ms to wait for apiserver health ...
	I0804 00:20:33.552119   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:33.557965   65087 system_pods.go:59] 9 kube-system pods found
	I0804 00:20:33.557987   65087 system_pods.go:61] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.557995   65087 system_pods.go:61] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.558000   65087 system_pods.go:61] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.558005   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.558009   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.558014   65087 system_pods.go:61] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.558018   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.558026   65087 system_pods.go:61] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.558035   65087 system_pods.go:61] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.558045   65087 system_pods.go:74] duration metric: took 5.921154ms to wait for pod list to return data ...
	I0804 00:20:33.558057   65087 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:33.608139   65087 default_sa.go:45] found service account: "default"
	I0804 00:20:33.608164   65087 default_sa.go:55] duration metric: took 50.097877ms for default service account to be created ...
	I0804 00:20:33.608174   65087 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:33.811878   65087 system_pods.go:86] 9 kube-system pods found
	I0804 00:20:33.811906   65087 system_pods.go:89] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.811912   65087 system_pods.go:89] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.811916   65087 system_pods.go:89] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.811920   65087 system_pods.go:89] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.811925   65087 system_pods.go:89] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.811928   65087 system_pods.go:89] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.811932   65087 system_pods.go:89] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.811939   65087 system_pods.go:89] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.811943   65087 system_pods.go:89] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.811950   65087 system_pods.go:126] duration metric: took 203.770479ms to wait for k8s-apps to be running ...
	I0804 00:20:33.811957   65087 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:33.812000   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:33.827146   65087 system_svc.go:56] duration metric: took 15.17867ms WaitForService to wait for kubelet
	I0804 00:20:33.827176   65087 kubeadm.go:582] duration metric: took 9.135367695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:33.827199   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:34.009032   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:34.009056   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:34.009076   65087 node_conditions.go:105] duration metric: took 181.872031ms to run NodePressure ...
	I0804 00:20:34.009086   65087 start.go:241] waiting for startup goroutines ...
	I0804 00:20:34.009112   65087 start.go:246] waiting for cluster config update ...
	I0804 00:20:34.009128   65087 start.go:255] writing updated cluster config ...
	I0804 00:20:34.009423   65087 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:34.059796   65087 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 00:20:34.061903   65087 out.go:177] * Done! kubectl is now configured to use "no-preload-118016" cluster and "default" namespace by default
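
Note on the readiness sequence above: before declaring the cluster ready, minikube polls the apiserver's /healthz endpoint until it returns 200 "ok", then waits for the kube-system pods and the kubelet service. Below is a minimal, hypothetical Go sketch of such a healthz probe; the address is taken from the log, while the TLS skip and client settings are assumptions for brevity, not minikube's actual implementation.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Skip certificate verification only to keep the sketch short.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.61.137:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with body "ok", as in the log above.
    	fmt.Printf("status %d: %s\n", resp.StatusCode, body)
    }
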
	I0804 00:21:00.664979   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:21:00.665100   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:21:00.666810   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:00.666904   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:00.667020   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:00.667150   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:00.667278   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:00.667370   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:00.670254   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:00.670337   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:00.670431   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:00.670537   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:00.670623   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:00.670721   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:00.670788   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:00.670883   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:00.670990   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:00.671079   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:00.671168   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:00.671217   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:00.671265   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:00.671359   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:00.671442   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:00.671529   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:00.671611   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:00.671756   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:00.671856   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:00.671888   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:00.671940   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:00.673410   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:00.673506   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:00.673573   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:00.673627   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:00.673692   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:00.673828   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:00.673876   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:00.673972   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674207   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674283   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674517   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674590   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674752   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674851   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675053   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675173   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675451   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675463   64758 kubeadm.go:310] 
	I0804 00:21:00.675511   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:21:00.675561   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:21:00.675571   64758 kubeadm.go:310] 
	I0804 00:21:00.675614   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:21:00.675656   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:21:00.675787   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:21:00.675797   64758 kubeadm.go:310] 
	I0804 00:21:00.675928   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:21:00.675970   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:21:00.676009   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:21:00.676026   64758 kubeadm.go:310] 
	I0804 00:21:00.676172   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:21:00.676278   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:21:00.676289   64758 kubeadm.go:310] 
	I0804 00:21:00.676393   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:21:00.676466   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:21:00.676532   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:21:00.676609   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:21:00.676632   64758 kubeadm.go:310] 
	W0804 00:21:00.676723   64758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 00:21:00.676765   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:21:01.138781   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:21:01.154405   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:21:01.164426   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:21:01.164445   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:21:01.164496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:21:01.173853   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:21:01.173907   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:21:01.183634   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:21:01.193283   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:21:01.193342   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:21:01.202427   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.212186   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:21:01.212235   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.222754   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:21:01.232996   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:21:01.233059   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
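
For context on the cleanup just above: after the failed init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that is missing it, so the retried kubeadm init can regenerate them. A hypothetical Go sketch of that pattern follows; the file paths and endpoint string come from the log, but the helper itself is not minikube's code.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: delete so kubeadm regenerates it.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				fmt.Println("could not remove", f, ":", rmErr)
    			}
    			continue
    		}
    		fmt.Println("keeping", f)
    	}
    }
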
	I0804 00:21:01.243778   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:21:01.319895   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:01.319975   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:01.474907   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:01.475029   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:01.475119   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:01.683624   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:01.685482   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:01.685584   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:01.685691   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:01.685792   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:01.685880   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:01.685991   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:01.686067   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:01.686147   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:01.686285   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:01.686399   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:01.686513   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:01.686600   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:01.686670   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:01.985613   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:02.088377   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:02.336621   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:02.448654   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:02.470140   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:02.471390   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:02.471456   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:02.610840   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:02.612641   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:02.612744   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:02.627044   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:02.629019   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:02.630430   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:02.633022   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:42.635581   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:42.635740   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:42.636036   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:47.636656   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:47.636879   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:57.637900   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:57.638098   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:17.638425   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:17.638634   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637807   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:57.637988   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637996   64758 kubeadm.go:310] 
	I0804 00:22:57.638035   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:22:57.638079   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:22:57.638085   64758 kubeadm.go:310] 
	I0804 00:22:57.638118   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:22:57.638148   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:22:57.638288   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:22:57.638309   64758 kubeadm.go:310] 
	I0804 00:22:57.638426   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:22:57.638507   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:22:57.638619   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:22:57.638640   64758 kubeadm.go:310] 
	I0804 00:22:57.638829   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:22:57.638944   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:22:57.638959   64758 kubeadm.go:310] 
	I0804 00:22:57.639107   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:22:57.639191   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:22:57.639300   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:22:57.639399   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:22:57.639412   64758 kubeadm.go:310] 
	I0804 00:22:57.639782   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:22:57.639904   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:22:57.640012   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:22:57.640091   64758 kubeadm.go:394] duration metric: took 8m3.172057529s to StartCluster
	I0804 00:22:57.640138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:22:57.640202   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:22:57.684020   64758 cri.go:89] found id: ""
	I0804 00:22:57.684054   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.684064   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:22:57.684072   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:22:57.684134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:22:57.722756   64758 cri.go:89] found id: ""
	I0804 00:22:57.722780   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.722788   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:22:57.722793   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:22:57.722851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:22:57.760371   64758 cri.go:89] found id: ""
	I0804 00:22:57.760400   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.760412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:22:57.760419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:22:57.760476   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:22:57.796114   64758 cri.go:89] found id: ""
	I0804 00:22:57.796144   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.796155   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:22:57.796162   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:22:57.796211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:22:57.842148   64758 cri.go:89] found id: ""
	I0804 00:22:57.842179   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.842191   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:22:57.842198   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:22:57.842286   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:22:57.914193   64758 cri.go:89] found id: ""
	I0804 00:22:57.914218   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.914229   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:22:57.914236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:22:57.914290   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:22:57.965944   64758 cri.go:89] found id: ""
	I0804 00:22:57.965973   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.965984   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:22:57.965991   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:22:57.966063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:22:58.003016   64758 cri.go:89] found id: ""
	I0804 00:22:58.003044   64758 logs.go:276] 0 containers: []
	W0804 00:22:58.003055   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:22:58.003066   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:22:58.003093   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:22:58.017277   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:22:58.017304   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:22:58.094192   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:22:58.094214   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:22:58.094227   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:22:58.210901   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:22:58.210944   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:22:58.249283   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:22:58.249317   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:22:58.300998   64758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:22:58.301054   64758 out.go:239] * 
	W0804 00:22:58.301115   64758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.301137   64758 out.go:239] * 
	W0804 00:22:58.301978   64758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:22:58.305305   64758 out.go:177] 
	W0804 00:22:58.306722   64758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.306816   64758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:22:58.306848   64758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:22:58.308372   64758 out.go:177] 
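
The repeated [kubelet-check] failures above come from kubeadm performing the equivalent of `curl -sSL http://localhost:10248/healthz` against the kubelet; "connection refused" on 127.0.0.1:10248 means the kubelet never started serving. A minimal, hypothetical Go snippet reproducing that probe is shown below purely as an illustration; it is not kubeadm's source.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 3 * time.Second}
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get("http://localhost:10248/healthz")
    		if err != nil {
    			// Matches the repeated "dial tcp 127.0.0.1:10248: connect: connection refused".
    			fmt.Println("kubelet healthz failed:", err)
    			time.Sleep(5 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("kubelet healthz status:", resp.StatusCode)
    		return
    	}
    }
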
	
	
	==> CRI-O <==
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.899708964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731523899675462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=960329b0-94da-49cf-911b-c30b6c718250 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.900331781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9cecedf-466f-4413-a730-cc2509462f03 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.900402990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9cecedf-466f-4413-a730-cc2509462f03 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.900460219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d9cecedf-466f-4413-a730-cc2509462f03 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.948177521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c0e0bda-cf76-4ef8-8e43-43ac85c54d4b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.948300404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c0e0bda-cf76-4ef8-8e43-43ac85c54d4b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.949904797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36b04eef-3965-4a28-9081-91a4d7c7aba7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.950358632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731523950329435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36b04eef-3965-4a28-9081-91a4d7c7aba7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.950997913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c81f02dc-b8e5-4862-b831-2b8462087f36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.951084641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c81f02dc-b8e5-4862-b831-2b8462087f36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.951131308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c81f02dc-b8e5-4862-b831-2b8462087f36 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.989833625Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19bc016e-194f-4bf7-87b3-7401ba210497 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.989936010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19bc016e-194f-4bf7-87b3-7401ba210497 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.991143429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0f4aa41-6274-498e-8da4-0d46b1af6829 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.991673818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731523991638170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0f4aa41-6274-498e-8da4-0d46b1af6829 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.992329934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=955bd2e9-f8c4-4c25-922a-b0d07b4f6068 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.992398810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=955bd2e9-f8c4-4c25-922a-b0d07b4f6068 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:03 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:03.992441537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=955bd2e9-f8c4-4c25-922a-b0d07b4f6068 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:04 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:04.023955217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf28f961-a4d4-4b49-a541-2ab2b956bbbf name=/runtime.v1.RuntimeService/Version
	Aug 04 00:32:04 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:04.024087447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf28f961-a4d4-4b49-a541-2ab2b956bbbf name=/runtime.v1.RuntimeService/Version
	Aug 04 00:32:04 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:04.025413592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8af36cb-fd13-49ab-8564-cc9482dd5989 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:32:04 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:04.025817275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731524025790457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8af36cb-fd13-49ab-8564-cc9482dd5989 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:32:04 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:04.026701465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=982d7b36-7512-4db2-aa75-a7fac1e9a9ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:04 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:04.026785078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=982d7b36-7512-4db2-aa75-a7fac1e9a9ee name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:32:04 old-k8s-version-576210 crio[653]: time="2024-08-04 00:32:04.026819618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=982d7b36-7512-4db2-aa75-a7fac1e9a9ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 4 00:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050227] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041126] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.789171] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.600311] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.566673] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.215618] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049621] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.191384] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.139006] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.271189] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +6.294398] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.066429] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.776417] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[Aug 4 00:15] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 4 00:19] systemd-fstab-generator[5026]: Ignoring "noauto" option for root device
	[Aug 4 00:21] systemd-fstab-generator[5298]: Ignoring "noauto" option for root device
	[  +0.071111] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:32:04 up 17 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-576210 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/sock_posix.go:70 +0x1c5
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net.internetSocket(0x4f7fe40, 0xc000203d40, 0x48ab5d6, 0x3, 0x4fb9160, 0x0, 0x4fb9160, 0xc000bfee70, 0x1, 0x0, ...)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/ipsock_posix.go:141 +0x145
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net.(*sysDialer).doDialTCP(0xc000700d80, 0x4f7fe40, 0xc000203d40, 0x0, 0xc000bfee70, 0x3fddce0, 0x70f9210, 0x0)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/tcpsock_posix.go:65 +0xc5
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net.(*sysDialer).dialTCP(0xc000700d80, 0x4f7fe40, 0xc000203d40, 0x0, 0xc000bfee70, 0x57b620, 0x48ab5d6, 0x7f4056eb2070)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net.(*sysDialer).dialSingle(0xc000700d80, 0x4f7fe40, 0xc000203d40, 0x4f1ff00, 0xc000bfee70, 0x0, 0x0, 0x0, 0x0)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net.(*sysDialer).dialSerial(0xc000700d80, 0x4f7fe40, 0xc000203d40, 0xc0002833d0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net.(*Dialer).DialContext(0xc0001d9c20, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0008b0960, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0001b6200, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0008b0960, 0x24, 0x60, 0x7f402d6ad6b0, 0x118, ...)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net/http.(*Transport).dial(0xc000aac000, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0008b0960, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net/http.(*Transport).dialConn(0xc000aac000, 0x4f7fe00, 0xc000122018, 0x0, 0xc0008fee40, 0x5, 0xc0008b0960, 0x24, 0x0, 0xc000878b40, ...)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: net/http.(*Transport).dialConnFor(0xc000aac000, 0xc000781ad0)
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]: created by net/http.(*Transport).queueForDial
	Aug 04 00:32:03 old-k8s-version-576210 kubelet[6463]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 04 00:32:03 old-k8s-version-576210 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 04 00:32:03 old-k8s-version-576210 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (225.27567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-576210" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.72s)
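The start log above ends with minikube's own suggestion for this failure mode: inspect the kubelet journal and retry with the systemd cgroup driver. A minimal sketch of acting on that suggestion against this profile (illustrative only, not part of the recorded test run; it assumes the profile still exists):

out/minikube-linux-amd64 ssh -p old-k8s-version-576210 -- sudo journalctl -xeu kubelet
out/minikube-linux-amd64 start -p old-k8s-version-576210 --extra-config=kubelet.cgroup-driver=systemd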

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (502.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-04 00:37:31.328742401 +0000 UTC m=+6587.270986446
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-969068 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.134µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-969068 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
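What this assertion polls for can be reproduced by hand; a sketch assuming the cluster context from the log is reachable (at this point in the run it was not). The test expects the scraper image to contain registry.k8s.io/echoserver:1.4:

kubectl --context default-k8s-diff-port-969068 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
kubectl --context default-k8s-diff-port-969068 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'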
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-969068 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-969068 logs -n 25: (1.561348702s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:33 UTC | 04 Aug 24 00:33 UTC |
	| start   | -p newest-cni-836281 --memory=2200 --alsologtostderr   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:33 UTC | 04 Aug 24 00:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-836281             | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-836281                  | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-836281 --memory=2200 --alsologtostderr   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-836281 image list                           | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| delete  | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| start   | -p auto-159277 --memory=3072                           | auto-159277                  | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| start   | -p kindnet-159277                                      | kindnet-159277               | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:37 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| start   | -p calico-159277 --memory=3072                         | calico-159277                | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-159277 pgrep -a                                | auto-159277                  | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-159277 pgrep -a                             | kindnet-159277               | jenkins | v1.33.1 | 04 Aug 24 00:37 UTC | 04 Aug 24 00:37 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:35:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:35:53.684379   73844 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:35:53.684515   73844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:35:53.684526   73844 out.go:304] Setting ErrFile to fd 2...
	I0804 00:35:53.684533   73844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:35:53.684859   73844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:35:53.685681   73844 out.go:298] Setting JSON to false
	I0804 00:35:53.687088   73844 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8298,"bootTime":1722723456,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:35:53.687168   73844 start.go:139] virtualization: kvm guest
	I0804 00:35:53.689592   73844 out.go:177] * [calico-159277] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:35:53.691241   73844 notify.go:220] Checking for updates...
	I0804 00:35:53.691246   73844 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:35:53.692888   73844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:35:53.694450   73844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:35:53.696030   73844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:35:53.697489   73844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:35:53.698791   73844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:35:53.700494   73844 config.go:182] Loaded profile config "auto-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:53.700613   73844 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:53.700707   73844 config.go:182] Loaded profile config "kindnet-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:53.700804   73844 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:35:53.737680   73844 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:35:53.739107   73844 start.go:297] selected driver: kvm2
	I0804 00:35:53.739127   73844 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:35:53.739143   73844 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:35:53.740150   73844 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:35:53.740258   73844 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:35:53.756603   73844 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:35:53.756655   73844 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:35:53.756866   73844 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:35:53.756890   73844 cni.go:84] Creating CNI manager for "calico"
	I0804 00:35:53.756901   73844 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0804 00:35:53.756948   73844 start.go:340] cluster config:
	{Name:calico-159277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:35:53.757035   73844 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:35:53.758882   73844 out.go:177] * Starting "calico-159277" primary control-plane node in "calico-159277" cluster
	I0804 00:35:50.976960   73669 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:35:50.977000   73669 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:35:50.977007   73669 cache.go:56] Caching tarball of preloaded images
	I0804 00:35:50.977073   73669 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:35:50.977083   73669 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:35:50.977178   73669 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/config.json ...
	I0804 00:35:50.977195   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/config.json: {Name:mk8f97447bdb05d5fa4f0a01b938c474684977c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:35:50.977331   73669 start.go:360] acquireMachinesLock for kindnet-159277: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:35:52.486926   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:52.487374   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:52.487402   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:52.487319   73287 retry.go:31] will retry after 1.821019738s: waiting for machine to come up
	I0804 00:35:54.310215   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:54.310979   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:54.311011   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:54.310897   73287 retry.go:31] will retry after 3.49563533s: waiting for machine to come up
	I0804 00:35:53.760257   73844 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:35:53.760318   73844 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:35:53.760332   73844 cache.go:56] Caching tarball of preloaded images
	I0804 00:35:53.760432   73844 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:35:53.760447   73844 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:35:53.760566   73844 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/config.json ...
	I0804 00:35:53.760590   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/config.json: {Name:mk58fdde30899806db1379dd743cab27052314a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:35:53.760794   73844 start.go:360] acquireMachinesLock for calico-159277: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:35:57.807902   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:57.808377   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:57.808399   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:57.808360   73287 retry.go:31] will retry after 3.926900016s: waiting for machine to come up
	I0804 00:36:01.739579   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:01.740163   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:36:01.740192   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:36:01.740119   73287 retry.go:31] will retry after 3.72592248s: waiting for machine to come up
	I0804 00:36:05.469926   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.470524   73264 main.go:141] libmachine: (auto-159277) Found IP for machine: 192.168.72.144
	I0804 00:36:05.470546   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has current primary IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.470567   73264 main.go:141] libmachine: (auto-159277) Reserving static IP address...
	I0804 00:36:05.471013   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find host DHCP lease matching {name: "auto-159277", mac: "52:54:00:99:56:51", ip: "192.168.72.144"} in network mk-auto-159277
	I0804 00:36:05.548074   73264 main.go:141] libmachine: (auto-159277) Reserved static IP address: 192.168.72.144
	I0804 00:36:05.548103   73264 main.go:141] libmachine: (auto-159277) Waiting for SSH to be available...
	I0804 00:36:05.548139   73264 main.go:141] libmachine: (auto-159277) DBG | Getting to WaitForSSH function...
	I0804 00:36:05.550716   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.551120   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:56:51}
	I0804 00:36:05.551149   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.551309   73264 main.go:141] libmachine: (auto-159277) DBG | Using SSH client type: external
	I0804 00:36:05.551334   73264 main.go:141] libmachine: (auto-159277) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa (-rw-------)
	I0804 00:36:05.551371   73264 main.go:141] libmachine: (auto-159277) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:36:05.551384   73264 main.go:141] libmachine: (auto-159277) DBG | About to run SSH command:
	I0804 00:36:05.551399   73264 main.go:141] libmachine: (auto-159277) DBG | exit 0
	I0804 00:36:05.681981   73264 main.go:141] libmachine: (auto-159277) DBG | SSH cmd err, output: <nil>: 
	I0804 00:36:05.682264   73264 main.go:141] libmachine: (auto-159277) KVM machine creation complete!
	I0804 00:36:05.682585   73264 main.go:141] libmachine: (auto-159277) Calling .GetConfigRaw
	I0804 00:36:05.683127   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:05.683316   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:05.683465   73264 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:36:05.683479   73264 main.go:141] libmachine: (auto-159277) Calling .GetState
	I0804 00:36:05.684670   73264 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:36:05.684686   73264 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:36:05.684694   73264 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:36:05.684702   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:05.686896   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.687248   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:05.687273   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.687518   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:05.687717   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:05.687851   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:05.687974   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:05.688130   73264 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:05.688370   73264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0804 00:36:05.688389   73264 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:36:05.796958   73264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:36:05.796984   73264 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:36:05.796995   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:05.799824   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.800144   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:05.800169   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.800347   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:05.800525   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:05.800673   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:05.800798   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:05.800935   73264 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:05.801122   73264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0804 00:36:05.801133   73264 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:36:05.914364   73264 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:36:05.914447   73264 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:36:05.914456   73264 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:36:05.914464   73264 main.go:141] libmachine: (auto-159277) Calling .GetMachineName
	I0804 00:36:05.914741   73264 buildroot.go:166] provisioning hostname "auto-159277"
	I0804 00:36:05.914770   73264 main.go:141] libmachine: (auto-159277) Calling .GetMachineName
	I0804 00:36:05.914968   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:05.917648   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.918028   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:05.918054   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:05.918225   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:05.918380   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:05.918508   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:05.918652   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:05.918800   73264 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:05.918985   73264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0804 00:36:05.918996   73264 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-159277 && echo "auto-159277" | sudo tee /etc/hostname
	I0804 00:36:06.046593   73264 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-159277
	
	I0804 00:36:06.046629   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:06.049326   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.049732   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:06.049761   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.049906   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:06.050130   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:06.050314   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:06.050454   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:06.050637   73264 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:06.050853   73264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0804 00:36:06.050875   73264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-159277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-159277/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-159277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:36:06.171506   73264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:36:06.171562   73264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:36:06.171591   73264 buildroot.go:174] setting up certificates
	I0804 00:36:06.171603   73264 provision.go:84] configureAuth start
	I0804 00:36:06.171620   73264 main.go:141] libmachine: (auto-159277) Calling .GetMachineName
	I0804 00:36:06.171938   73264 main.go:141] libmachine: (auto-159277) Calling .GetIP
	I0804 00:36:06.174748   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.175180   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:06.175202   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.175410   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:06.177948   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.178288   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:06.178307   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.178437   73264 provision.go:143] copyHostCerts
	I0804 00:36:06.178514   73264 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:36:06.178525   73264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:36:06.178593   73264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:36:06.178696   73264 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:36:06.178704   73264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:36:06.178729   73264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:36:06.178792   73264 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:36:06.178804   73264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:36:06.178825   73264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:36:06.178883   73264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.auto-159277 san=[127.0.0.1 192.168.72.144 auto-159277 localhost minikube]
	I0804 00:36:06.418103   73264 provision.go:177] copyRemoteCerts
	I0804 00:36:06.418159   73264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:36:06.418181   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:06.420775   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.421117   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:06.421150   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.421286   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:06.421489   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:06.421630   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:06.421772   73264 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa Username:docker}
	I0804 00:36:06.508192   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:36:06.533685   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0804 00:36:06.558735   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:36:06.583608   73264 provision.go:87] duration metric: took 411.988998ms to configureAuth
	I0804 00:36:06.583645   73264 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:36:06.583811   73264 config.go:182] Loaded profile config "auto-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:36:06.583896   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:06.586564   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.586912   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:06.586942   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.587092   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:06.587303   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:06.587438   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:06.587547   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:06.587674   73264 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:06.587842   73264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0804 00:36:06.587864   73264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:36:06.875946   73264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:36:06.875971   73264 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:36:06.875980   73264 main.go:141] libmachine: (auto-159277) Calling .GetURL
	I0804 00:36:06.877153   73264 main.go:141] libmachine: (auto-159277) DBG | Using libvirt version 6000000
	I0804 00:36:06.879502   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.879856   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:06.879883   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.880062   73264 main.go:141] libmachine: Docker is up and running!
	I0804 00:36:06.880078   73264 main.go:141] libmachine: Reticulating splines...
	I0804 00:36:06.880085   73264 client.go:171] duration metric: took 24.775107632s to LocalClient.Create
	I0804 00:36:06.880109   73264 start.go:167] duration metric: took 24.775175644s to libmachine.API.Create "auto-159277"
	I0804 00:36:06.880117   73264 start.go:293] postStartSetup for "auto-159277" (driver="kvm2")
	I0804 00:36:06.880128   73264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:36:06.880143   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:06.880356   73264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:36:06.880375   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:06.882521   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.882848   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:06.882871   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:06.883022   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:06.883197   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:06.883402   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:06.883605   73264 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa Username:docker}
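
sshutil.go above opens an SSH client to 192.168.72.144:22 as user docker with the machine's id_rsa key; the ssh_runner lines that follow use that connection to run commands and scp files onto the guest. A minimal sketch of opening the same connection and running one command, assuming golang.org/x/crypto/ssh (this is not minikube's sshutil/ssh_runner code; the key path, address and username come from the log line above):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Private key path as reported by sshutil.go above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.144:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same kind of remote command the log shows next (cat /etc/os-release).
	out, err := session.CombinedOutput("cat /etc/os-release")
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
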
	I0804 00:36:06.968687   73264 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:36:06.973346   73264 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:36:06.973395   73264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:36:06.973461   73264 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:36:06.973531   73264 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:36:06.973620   73264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:36:06.983297   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:36:07.010399   73264 start.go:296] duration metric: took 130.269797ms for postStartSetup
	I0804 00:36:07.010445   73264 main.go:141] libmachine: (auto-159277) Calling .GetConfigRaw
	I0804 00:36:07.011061   73264 main.go:141] libmachine: (auto-159277) Calling .GetIP
	I0804 00:36:07.130325   73669 start.go:364] duration metric: took 16.152964855s to acquireMachinesLock for "kindnet-159277"
	I0804 00:36:07.130403   73669 start.go:93] Provisioning new machine with config: &{Name:kindnet-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:36:07.130526   73669 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:36:07.013826   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.014138   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:07.014173   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.014412   73264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/config.json ...
	I0804 00:36:07.014574   73264 start.go:128] duration metric: took 24.927639251s to createHost
	I0804 00:36:07.014601   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:07.017080   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.017396   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:07.017425   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.017585   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:07.017784   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:07.017961   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:07.018092   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:07.018240   73264 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:07.018402   73264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.144 22 <nil> <nil>}
	I0804 00:36:07.018412   73264 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:36:07.130171   73264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731767.106105150
	
	I0804 00:36:07.130192   73264 fix.go:216] guest clock: 1722731767.106105150
	I0804 00:36:07.130200   73264 fix.go:229] Guest: 2024-08-04 00:36:07.10610515 +0000 UTC Remote: 2024-08-04 00:36:07.014586805 +0000 UTC m=+25.037688158 (delta=91.518345ms)
	I0804 00:36:07.130221   73264 fix.go:200] guest clock delta is within tolerance: 91.518345ms
	I0804 00:36:07.130226   73264 start.go:83] releasing machines lock for "auto-159277", held for 25.043372818s
	I0804 00:36:07.130254   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:07.130529   73264 main.go:141] libmachine: (auto-159277) Calling .GetIP
	I0804 00:36:07.133647   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.134087   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:07.134117   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.134251   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:07.134790   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:07.134977   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:07.135056   73264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:36:07.135092   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:07.135295   73264 ssh_runner.go:195] Run: cat /version.json
	I0804 00:36:07.135319   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:07.137864   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.138049   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.138209   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:07.138242   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.138438   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:07.138519   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:07.138536   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:07.138583   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:07.138728   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:07.138804   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:07.138926   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:07.139002   73264 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa Username:docker}
	I0804 00:36:07.139033   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:07.139319   73264 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa Username:docker}
	I0804 00:36:07.243460   73264 ssh_runner.go:195] Run: systemctl --version
	I0804 00:36:07.250364   73264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:36:07.413536   73264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:36:07.419797   73264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:36:07.419884   73264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:36:07.437130   73264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:36:07.437153   73264 start.go:495] detecting cgroup driver to use...
	I0804 00:36:07.437206   73264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:36:07.455172   73264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:36:07.470879   73264 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:36:07.470944   73264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:36:07.485969   73264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:36:07.500755   73264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:36:07.622488   73264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:36:07.799819   73264 docker.go:233] disabling docker service ...
	I0804 00:36:07.799904   73264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:36:07.814865   73264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:36:07.827771   73264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:36:07.943346   73264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:36:08.061765   73264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:36:08.076567   73264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:36:08.095895   73264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:36:08.095971   73264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:08.106923   73264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:36:08.106982   73264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:08.118337   73264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:08.129956   73264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:08.147197   73264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:36:08.159343   73264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:08.170756   73264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:08.192551   73264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
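
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf for minikube: pause_image becomes registry.k8s.io/pause:3.9, cgroup_manager becomes "cgroupfs", any existing conmon_cgroup line is replaced with conmon_cgroup = "pod", and a default_sysctls entry opens unprivileged ports from 0. A rough Go equivalent of those edits, run against a made-up sample config rather than the VM's real file (sketch only, not minikube's crio.go):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Illustrative sample of a 02-crio.conf fragment before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause image -> registry.k8s.io/pause:3.9 (first sed above)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup driver -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it as "pod"
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf, `cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	// allow unprivileged processes to bind low ports
	conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	fmt.Println(conf)
}
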
	I0804 00:36:08.203986   73264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:36:08.214531   73264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:36:08.214589   73264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:36:08.229525   73264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:36:08.240034   73264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:36:08.373674   73264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:36:08.535415   73264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:36:08.535474   73264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:36:08.540514   73264 start.go:563] Will wait 60s for crictl version
	I0804 00:36:08.540567   73264 ssh_runner.go:195] Run: which crictl
	I0804 00:36:08.544691   73264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:36:08.588538   73264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:36:08.588645   73264 ssh_runner.go:195] Run: crio --version
	I0804 00:36:08.618303   73264 ssh_runner.go:195] Run: crio --version
	I0804 00:36:08.656913   73264 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:36:07.132935   73669 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0804 00:36:07.133112   73669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:36:07.133168   73669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:36:07.153528   73669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0804 00:36:07.154005   73669 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:36:07.154653   73669 main.go:141] libmachine: Using API Version  1
	I0804 00:36:07.154679   73669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:36:07.155045   73669 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:36:07.155263   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetMachineName
	I0804 00:36:07.155436   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:07.155596   73669 start.go:159] libmachine.API.Create for "kindnet-159277" (driver="kvm2")
	I0804 00:36:07.155625   73669 client.go:168] LocalClient.Create starting
	I0804 00:36:07.155658   73669 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0804 00:36:07.155698   73669 main.go:141] libmachine: Decoding PEM data...
	I0804 00:36:07.155723   73669 main.go:141] libmachine: Parsing certificate...
	I0804 00:36:07.155800   73669 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0804 00:36:07.155823   73669 main.go:141] libmachine: Decoding PEM data...
	I0804 00:36:07.155842   73669 main.go:141] libmachine: Parsing certificate...
	I0804 00:36:07.155865   73669 main.go:141] libmachine: Running pre-create checks...
	I0804 00:36:07.155876   73669 main.go:141] libmachine: (kindnet-159277) Calling .PreCreateCheck
	I0804 00:36:07.156262   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetConfigRaw
	I0804 00:36:07.156704   73669 main.go:141] libmachine: Creating machine...
	I0804 00:36:07.156719   73669 main.go:141] libmachine: (kindnet-159277) Calling .Create
	I0804 00:36:07.156873   73669 main.go:141] libmachine: (kindnet-159277) Creating KVM machine...
	I0804 00:36:07.158282   73669 main.go:141] libmachine: (kindnet-159277) DBG | found existing default KVM network
	I0804 00:36:07.159379   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:07.159212   73986 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b7:12:58} reservation:<nil>}
	I0804 00:36:07.160212   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:07.160138   73986 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002645f0}
	I0804 00:36:07.160233   73669 main.go:141] libmachine: (kindnet-159277) DBG | created network xml: 
	I0804 00:36:07.160250   73669 main.go:141] libmachine: (kindnet-159277) DBG | <network>
	I0804 00:36:07.160258   73669 main.go:141] libmachine: (kindnet-159277) DBG |   <name>mk-kindnet-159277</name>
	I0804 00:36:07.160266   73669 main.go:141] libmachine: (kindnet-159277) DBG |   <dns enable='no'/>
	I0804 00:36:07.160274   73669 main.go:141] libmachine: (kindnet-159277) DBG |   
	I0804 00:36:07.160288   73669 main.go:141] libmachine: (kindnet-159277) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0804 00:36:07.160305   73669 main.go:141] libmachine: (kindnet-159277) DBG |     <dhcp>
	I0804 00:36:07.160315   73669 main.go:141] libmachine: (kindnet-159277) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0804 00:36:07.160328   73669 main.go:141] libmachine: (kindnet-159277) DBG |     </dhcp>
	I0804 00:36:07.160341   73669 main.go:141] libmachine: (kindnet-159277) DBG |   </ip>
	I0804 00:36:07.160347   73669 main.go:141] libmachine: (kindnet-159277) DBG |   
	I0804 00:36:07.160360   73669 main.go:141] libmachine: (kindnet-159277) DBG | </network>
	I0804 00:36:07.160370   73669 main.go:141] libmachine: (kindnet-159277) DBG | 
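
In the lines above, network.go skips 192.168.39.0/24 because it is already taken and settles on 192.168.50.0/24 as a free private subnet before writing the network XML for mk-kindnet-159277. A minimal sketch of that kind of free-subnet scan follows; the candidate list is illustrative and this is not minikube's network.go.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that does not overlap any
// subnet already in use on the host.
func firstFreeSubnet(candidates, taken []string) (string, error) {
	for _, c := range candidates {
		_, cNet, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		free := true
		for _, t := range taken {
			_, tNet, err := net.ParseCIDR(t)
			if err != nil {
				return "", err
			}
			// Overlap if either network contains the other's base address.
			if tNet.Contains(cNet.IP) || cNet.Contains(tNet.IP) {
				free = false
				break
			}
		}
		if free {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	// 192.168.39.0/24 is occupied (virbr2) and 192.168.72.0/24 hosts auto-159277,
	// per the log; the candidate order here is just for illustration.
	taken := []string{"192.168.39.0/24", "192.168.72.0/24"}
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	subnet, err := firstFreeSubnet(candidates, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", subnet) // -> 192.168.50.0/24
}
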
	I0804 00:36:07.166105   73669 main.go:141] libmachine: (kindnet-159277) DBG | trying to create private KVM network mk-kindnet-159277 192.168.50.0/24...
	I0804 00:36:07.236047   73669 main.go:141] libmachine: (kindnet-159277) DBG | private KVM network mk-kindnet-159277 192.168.50.0/24 created
	I0804 00:36:07.236080   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:07.236005   73986 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:36:07.236108   73669 main.go:141] libmachine: (kindnet-159277) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277 ...
	I0804 00:36:07.236137   73669 main.go:141] libmachine: (kindnet-159277) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:36:07.236215   73669 main.go:141] libmachine: (kindnet-159277) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:36:07.487422   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:07.487320   73986 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa...
	I0804 00:36:07.689601   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:07.689482   73986 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/kindnet-159277.rawdisk...
	I0804 00:36:07.689641   73669 main.go:141] libmachine: (kindnet-159277) DBG | Writing magic tar header
	I0804 00:36:07.689655   73669 main.go:141] libmachine: (kindnet-159277) DBG | Writing SSH key tar header
	I0804 00:36:07.689668   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:07.689598   73986 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277 ...
	I0804 00:36:07.689715   73669 main.go:141] libmachine: (kindnet-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277
	I0804 00:36:07.689821   73669 main.go:141] libmachine: (kindnet-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0804 00:36:07.689846   73669 main.go:141] libmachine: (kindnet-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277 (perms=drwx------)
	I0804 00:36:07.689858   73669 main.go:141] libmachine: (kindnet-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:36:07.689907   73669 main.go:141] libmachine: (kindnet-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0804 00:36:07.689935   73669 main.go:141] libmachine: (kindnet-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:36:07.689950   73669 main.go:141] libmachine: (kindnet-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:36:07.689966   73669 main.go:141] libmachine: (kindnet-159277) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:36:07.689976   73669 main.go:141] libmachine: (kindnet-159277) DBG | Checking permissions on dir: /home
	I0804 00:36:07.689993   73669 main.go:141] libmachine: (kindnet-159277) DBG | Skipping /home - not owner
	I0804 00:36:07.690020   73669 main.go:141] libmachine: (kindnet-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0804 00:36:07.690087   73669 main.go:141] libmachine: (kindnet-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0804 00:36:07.690107   73669 main.go:141] libmachine: (kindnet-159277) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:36:07.690118   73669 main.go:141] libmachine: (kindnet-159277) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:36:07.690133   73669 main.go:141] libmachine: (kindnet-159277) Creating domain...
	I0804 00:36:07.691000   73669 main.go:141] libmachine: (kindnet-159277) define libvirt domain using xml: 
	I0804 00:36:07.691023   73669 main.go:141] libmachine: (kindnet-159277) <domain type='kvm'>
	I0804 00:36:07.691064   73669 main.go:141] libmachine: (kindnet-159277)   <name>kindnet-159277</name>
	I0804 00:36:07.691087   73669 main.go:141] libmachine: (kindnet-159277)   <memory unit='MiB'>3072</memory>
	I0804 00:36:07.691096   73669 main.go:141] libmachine: (kindnet-159277)   <vcpu>2</vcpu>
	I0804 00:36:07.691106   73669 main.go:141] libmachine: (kindnet-159277)   <features>
	I0804 00:36:07.691113   73669 main.go:141] libmachine: (kindnet-159277)     <acpi/>
	I0804 00:36:07.691122   73669 main.go:141] libmachine: (kindnet-159277)     <apic/>
	I0804 00:36:07.691132   73669 main.go:141] libmachine: (kindnet-159277)     <pae/>
	I0804 00:36:07.691142   73669 main.go:141] libmachine: (kindnet-159277)     
	I0804 00:36:07.691150   73669 main.go:141] libmachine: (kindnet-159277)   </features>
	I0804 00:36:07.691166   73669 main.go:141] libmachine: (kindnet-159277)   <cpu mode='host-passthrough'>
	I0804 00:36:07.691176   73669 main.go:141] libmachine: (kindnet-159277)   
	I0804 00:36:07.691184   73669 main.go:141] libmachine: (kindnet-159277)   </cpu>
	I0804 00:36:07.691194   73669 main.go:141] libmachine: (kindnet-159277)   <os>
	I0804 00:36:07.691202   73669 main.go:141] libmachine: (kindnet-159277)     <type>hvm</type>
	I0804 00:36:07.691212   73669 main.go:141] libmachine: (kindnet-159277)     <boot dev='cdrom'/>
	I0804 00:36:07.691217   73669 main.go:141] libmachine: (kindnet-159277)     <boot dev='hd'/>
	I0804 00:36:07.691231   73669 main.go:141] libmachine: (kindnet-159277)     <bootmenu enable='no'/>
	I0804 00:36:07.691243   73669 main.go:141] libmachine: (kindnet-159277)   </os>
	I0804 00:36:07.691254   73669 main.go:141] libmachine: (kindnet-159277)   <devices>
	I0804 00:36:07.691264   73669 main.go:141] libmachine: (kindnet-159277)     <disk type='file' device='cdrom'>
	I0804 00:36:07.691277   73669 main.go:141] libmachine: (kindnet-159277)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/boot2docker.iso'/>
	I0804 00:36:07.691287   73669 main.go:141] libmachine: (kindnet-159277)       <target dev='hdc' bus='scsi'/>
	I0804 00:36:07.691295   73669 main.go:141] libmachine: (kindnet-159277)       <readonly/>
	I0804 00:36:07.691303   73669 main.go:141] libmachine: (kindnet-159277)     </disk>
	I0804 00:36:07.691322   73669 main.go:141] libmachine: (kindnet-159277)     <disk type='file' device='disk'>
	I0804 00:36:07.691344   73669 main.go:141] libmachine: (kindnet-159277)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:36:07.691380   73669 main.go:141] libmachine: (kindnet-159277)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/kindnet-159277.rawdisk'/>
	I0804 00:36:07.691392   73669 main.go:141] libmachine: (kindnet-159277)       <target dev='hda' bus='virtio'/>
	I0804 00:36:07.691400   73669 main.go:141] libmachine: (kindnet-159277)     </disk>
	I0804 00:36:07.691405   73669 main.go:141] libmachine: (kindnet-159277)     <interface type='network'>
	I0804 00:36:07.691413   73669 main.go:141] libmachine: (kindnet-159277)       <source network='mk-kindnet-159277'/>
	I0804 00:36:07.691428   73669 main.go:141] libmachine: (kindnet-159277)       <model type='virtio'/>
	I0804 00:36:07.691441   73669 main.go:141] libmachine: (kindnet-159277)     </interface>
	I0804 00:36:07.691451   73669 main.go:141] libmachine: (kindnet-159277)     <interface type='network'>
	I0804 00:36:07.691460   73669 main.go:141] libmachine: (kindnet-159277)       <source network='default'/>
	I0804 00:36:07.691470   73669 main.go:141] libmachine: (kindnet-159277)       <model type='virtio'/>
	I0804 00:36:07.691478   73669 main.go:141] libmachine: (kindnet-159277)     </interface>
	I0804 00:36:07.691487   73669 main.go:141] libmachine: (kindnet-159277)     <serial type='pty'>
	I0804 00:36:07.691496   73669 main.go:141] libmachine: (kindnet-159277)       <target port='0'/>
	I0804 00:36:07.691510   73669 main.go:141] libmachine: (kindnet-159277)     </serial>
	I0804 00:36:07.691521   73669 main.go:141] libmachine: (kindnet-159277)     <console type='pty'>
	I0804 00:36:07.691530   73669 main.go:141] libmachine: (kindnet-159277)       <target type='serial' port='0'/>
	I0804 00:36:07.691542   73669 main.go:141] libmachine: (kindnet-159277)     </console>
	I0804 00:36:07.691552   73669 main.go:141] libmachine: (kindnet-159277)     <rng model='virtio'>
	I0804 00:36:07.691561   73669 main.go:141] libmachine: (kindnet-159277)       <backend model='random'>/dev/random</backend>
	I0804 00:36:07.691571   73669 main.go:141] libmachine: (kindnet-159277)     </rng>
	I0804 00:36:07.691588   73669 main.go:141] libmachine: (kindnet-159277)     
	I0804 00:36:07.691605   73669 main.go:141] libmachine: (kindnet-159277)     
	I0804 00:36:07.691616   73669 main.go:141] libmachine: (kindnet-159277)   </devices>
	I0804 00:36:07.691627   73669 main.go:141] libmachine: (kindnet-159277) </domain>
	I0804 00:36:07.691649   73669 main.go:141] libmachine: (kindnet-159277) 
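
The XML above defines the kindnet-159277 domain: a boot2docker ISO cdrom, a raw disk image, and two virtio NICs (one on mk-kindnet-159277, one on the default network). minikube's kvm2 driver applies this through the libvirt Go bindings; the sketch below simply replays the same operations by hand via virsh, assuming domain.xml contains the XML printed above.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, panicking on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	// Define and start the domain from the XML shown in the log.
	run("virsh", "--connect", "qemu:///system", "define", "domain.xml")
	run("virsh", "--connect", "qemu:///system", "start", "kindnet-159277")
	// Rough equivalent of the "Waiting to get IP..." loop: ask the private
	// network's DHCP server which lease the new MAC picked up.
	run("virsh", "--connect", "qemu:///system", "net-dhcp-leases", "mk-kindnet-159277")
	fmt.Println("domain defined and started")
}
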
	I0804 00:36:07.695976   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:ae:29:30 in network default
	I0804 00:36:07.696569   73669 main.go:141] libmachine: (kindnet-159277) Ensuring networks are active...
	I0804 00:36:07.696609   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:07.697325   73669 main.go:141] libmachine: (kindnet-159277) Ensuring network default is active
	I0804 00:36:07.697766   73669 main.go:141] libmachine: (kindnet-159277) Ensuring network mk-kindnet-159277 is active
	I0804 00:36:07.698237   73669 main.go:141] libmachine: (kindnet-159277) Getting domain xml...
	I0804 00:36:07.699069   73669 main.go:141] libmachine: (kindnet-159277) Creating domain...
	I0804 00:36:09.019636   73669 main.go:141] libmachine: (kindnet-159277) Waiting to get IP...
	I0804 00:36:09.020735   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:09.021266   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:09.021449   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:09.021335   73986 retry.go:31] will retry after 268.843524ms: waiting for machine to come up
	I0804 00:36:09.292184   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:09.292787   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:09.292823   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:09.292720   73986 retry.go:31] will retry after 361.935943ms: waiting for machine to come up
	I0804 00:36:09.656262   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:09.656841   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:09.656873   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:09.656812   73986 retry.go:31] will retry after 337.608674ms: waiting for machine to come up
	I0804 00:36:09.996640   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:09.997155   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:09.997201   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:09.997111   73986 retry.go:31] will retry after 521.345381ms: waiting for machine to come up
	I0804 00:36:10.519787   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:10.520446   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:10.520478   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:10.520382   73986 retry.go:31] will retry after 699.231815ms: waiting for machine to come up
	I0804 00:36:08.658555   73264 main.go:141] libmachine: (auto-159277) Calling .GetIP
	I0804 00:36:08.663782   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:08.664399   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:08.664428   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:08.664755   73264 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:36:08.669916   73264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:36:08.684573   73264 kubeadm.go:883] updating cluster {Name:auto-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:36:08.684736   73264 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:36:08.684812   73264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:36:08.721620   73264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:36:08.721690   73264 ssh_runner.go:195] Run: which lz4
	I0804 00:36:08.725972   73264 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:36:08.730476   73264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:36:08.730508   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:36:10.314217   73264 crio.go:462] duration metric: took 1.58827066s to copy over tarball
	I0804 00:36:10.314320   73264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:36:11.221369   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:11.221996   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:11.222030   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:11.221926   73986 retry.go:31] will retry after 836.800688ms: waiting for machine to come up
	I0804 00:36:12.060438   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:12.060952   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:12.060982   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:12.060886   73986 retry.go:31] will retry after 1.044249704s: waiting for machine to come up
	I0804 00:36:13.106581   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:13.107050   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:13.107080   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:13.107010   73986 retry.go:31] will retry after 1.397429851s: waiting for machine to come up
	I0804 00:36:14.506571   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:14.507023   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:14.507046   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:14.506975   73986 retry.go:31] will retry after 1.33253319s: waiting for machine to come up
	I0804 00:36:15.841096   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:15.841595   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:15.841613   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:15.841567   73986 retry.go:31] will retry after 1.832392096s: waiting for machine to come up
	I0804 00:36:12.811203   73264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.496849996s)
	I0804 00:36:12.811238   73264 crio.go:469] duration metric: took 2.496990151s to extract the tarball
	I0804 00:36:12.811247   73264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:36:12.855581   73264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:36:12.901160   73264 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:36:12.901183   73264 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:36:12.901192   73264 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.30.3 crio true true} ...
	I0804 00:36:12.901305   73264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-159277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:auto-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:36:12.901393   73264 ssh_runner.go:195] Run: crio config
	I0804 00:36:12.957088   73264 cni.go:84] Creating CNI manager for ""
	I0804 00:36:12.957110   73264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:36:12.957119   73264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:36:12.957140   73264 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-159277 NodeName:auto-159277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:36:12.957262   73264 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-159277"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:36:12.957317   73264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:36:12.968068   73264 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:36:12.968147   73264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:36:12.977719   73264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0804 00:36:12.994982   73264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:36:13.011730   73264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0804 00:36:13.028799   73264 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0804 00:36:13.032814   73264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:36:13.045924   73264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:36:13.166285   73264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:36:13.184954   73264 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277 for IP: 192.168.72.144
	I0804 00:36:13.184982   73264 certs.go:194] generating shared ca certs ...
	I0804 00:36:13.185002   73264 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:13.185179   73264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:36:13.185248   73264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:36:13.185263   73264 certs.go:256] generating profile certs ...
	I0804 00:36:13.185320   73264 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/client.key
	I0804 00:36:13.185335   73264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/client.crt with IP's: []
	I0804 00:36:13.248949   73264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/client.crt ...
	I0804 00:36:13.248978   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/client.crt: {Name:mk31ef69d0b76e26f8638c99ae90269e1d9a09ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:13.249170   73264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/client.key ...
	I0804 00:36:13.249184   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/client.key: {Name:mk77316b09a50427c25f72d4470f5153eee5fcba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:13.249287   73264 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.key.f81b4962
	I0804 00:36:13.249303   73264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.crt.f81b4962 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.144]
	I0804 00:36:13.424394   73264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.crt.f81b4962 ...
	I0804 00:36:13.424423   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.crt.f81b4962: {Name:mkfc8a6a99c6962bbdd24a9ecec773317dd49548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:13.424604   73264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.key.f81b4962 ...
	I0804 00:36:13.424620   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.key.f81b4962: {Name:mk3042c3e43883002cd58e46023f0718b6c46f2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:13.424736   73264 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.crt.f81b4962 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.crt
	I0804 00:36:13.424842   73264 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.key.f81b4962 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.key
	I0804 00:36:13.424905   73264 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.key
	I0804 00:36:13.424922   73264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.crt with IP's: []
	I0804 00:36:13.769621   73264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.crt ...
	I0804 00:36:13.769649   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.crt: {Name:mkcff638f4d4ea591980cb949aff7dab8cb3282a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:13.769836   73264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.key ...
	I0804 00:36:13.769852   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.key: {Name:mkfac8abce7e141731841cbbf65c938e8c2fb2e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:13.770079   73264 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:36:13.770132   73264 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:36:13.770144   73264 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:36:13.770164   73264 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:36:13.770201   73264 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:36:13.770224   73264 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:36:13.770263   73264 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:36:13.770828   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:36:13.802922   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:36:13.830721   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:36:13.856340   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:36:13.885410   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0804 00:36:13.911702   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 00:36:13.935324   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:36:13.962492   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:36:13.988523   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:36:14.014979   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:36:14.040549   73264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:36:14.066801   73264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:36:14.086120   73264 ssh_runner.go:195] Run: openssl version
	I0804 00:36:14.092277   73264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:36:14.104251   73264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:36:14.109095   73264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:36:14.109172   73264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:36:14.115286   73264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:36:14.127261   73264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:36:14.139385   73264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:36:14.144214   73264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:36:14.144291   73264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:36:14.150429   73264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:36:14.163056   73264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:36:14.175279   73264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:36:14.179906   73264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:36:14.179968   73264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:36:14.186010   73264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
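The hash-named symlinks created here follow the standard OpenSSL "c_rehash" convention: "openssl x509 -hash -noout" prints the subject-name hash of a certificate, and a symlink named <hash>.0 under /etc/ssl/certs is what lets OpenSSL locate that CA during verification. Doing the same by hand for one of the certificates shown above (a sketch):

# Install a CA into the OpenSSL hash directory by hand
CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
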
	I0804 00:36:14.197560   73264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:36:14.201711   73264 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:36:14.201770   73264 kubeadm.go:392] StartCluster: {Name:auto-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:36:14.201845   73264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:36:14.201912   73264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:36:14.239070   73264 cri.go:89] found id: ""
	I0804 00:36:14.239151   73264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:36:14.251763   73264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:36:14.262898   73264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:36:14.273516   73264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:36:14.273548   73264 kubeadm.go:157] found existing configuration files:
	
	I0804 00:36:14.273604   73264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:36:14.285004   73264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:36:14.285076   73264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:36:14.296646   73264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:36:14.308390   73264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:36:14.308457   73264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:36:14.319886   73264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:36:14.330668   73264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:36:14.330737   73264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:36:14.342515   73264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:36:14.354321   73264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:36:14.354391   73264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:36:14.366958   73264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:36:14.588046   73264 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:36:17.676168   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:17.676631   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:17.676652   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:17.676596   73986 retry.go:31] will retry after 1.777761909s: waiting for machine to come up
	I0804 00:36:19.456769   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:19.457277   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:19.457306   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:19.457250   73986 retry.go:31] will retry after 3.499698533s: waiting for machine to come up
	I0804 00:36:25.137086   73264 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0804 00:36:25.137196   73264 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:36:25.137305   73264 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:36:25.137454   73264 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:36:25.137577   73264 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:36:25.137687   73264 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:36:25.139437   73264 out.go:204]   - Generating certificates and keys ...
	I0804 00:36:25.139534   73264 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:36:25.139631   73264 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:36:25.139724   73264 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:36:25.139800   73264 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:36:25.139880   73264 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:36:25.139949   73264 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:36:25.140034   73264 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:36:25.140196   73264 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-159277 localhost] and IPs [192.168.72.144 127.0.0.1 ::1]
	I0804 00:36:25.140289   73264 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:36:25.140506   73264 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-159277 localhost] and IPs [192.168.72.144 127.0.0.1 ::1]
	I0804 00:36:25.140604   73264 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:36:25.140707   73264 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:36:25.140785   73264 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:36:25.140878   73264 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:36:25.140943   73264 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:36:25.141015   73264 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:36:25.141081   73264 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:36:25.141165   73264 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:36:25.141212   73264 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:36:25.141278   73264 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:36:25.141332   73264 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:36:25.142870   73264 out.go:204]   - Booting up control plane ...
	I0804 00:36:25.142977   73264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:36:25.143086   73264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:36:25.143177   73264 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:36:25.143368   73264 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:36:25.143483   73264 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:36:25.143534   73264 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:36:25.143696   73264 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:36:25.143757   73264 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0804 00:36:25.143813   73264 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.533372ms
	I0804 00:36:25.143879   73264 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:36:25.143933   73264 kubeadm.go:310] [api-check] The API server is healthy after 5.506091986s
	I0804 00:36:25.144021   73264 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:36:25.144129   73264 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:36:25.144219   73264 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:36:25.144432   73264 kubeadm.go:310] [mark-control-plane] Marking the node auto-159277 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:36:25.144492   73264 kubeadm.go:310] [bootstrap-token] Using token: 8q4p73.z63himo905nlq9ll
	I0804 00:36:25.145887   73264 out.go:204]   - Configuring RBAC rules ...
	I0804 00:36:25.146001   73264 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:36:25.146133   73264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:36:25.146285   73264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:36:25.146452   73264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:36:25.146588   73264 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:36:25.146667   73264 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:36:25.146811   73264 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:36:25.146868   73264 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:36:25.146927   73264 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:36:25.146936   73264 kubeadm.go:310] 
	I0804 00:36:25.147009   73264 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:36:25.147024   73264 kubeadm.go:310] 
	I0804 00:36:25.147140   73264 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:36:25.147151   73264 kubeadm.go:310] 
	I0804 00:36:25.147197   73264 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:36:25.147270   73264 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:36:25.147331   73264 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:36:25.147340   73264 kubeadm.go:310] 
	I0804 00:36:25.147410   73264 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:36:25.147419   73264 kubeadm.go:310] 
	I0804 00:36:25.147484   73264 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:36:25.147492   73264 kubeadm.go:310] 
	I0804 00:36:25.147564   73264 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:36:25.147669   73264 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:36:25.147777   73264 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:36:25.147786   73264 kubeadm.go:310] 
	I0804 00:36:25.147869   73264 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:36:25.147940   73264 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:36:25.147947   73264 kubeadm.go:310] 
	I0804 00:36:25.148017   73264 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8q4p73.z63himo905nlq9ll \
	I0804 00:36:25.148113   73264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:36:25.148133   73264 kubeadm.go:310] 	--control-plane 
	I0804 00:36:25.148155   73264 kubeadm.go:310] 
	I0804 00:36:25.148234   73264 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:36:25.148240   73264 kubeadm.go:310] 
	I0804 00:36:25.148305   73264 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8q4p73.z63himo905nlq9ll \
	I0804 00:36:25.148418   73264 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:36:25.148435   73264 cni.go:84] Creating CNI manager for ""
	I0804 00:36:25.148444   73264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:36:25.150012   73264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:36:22.958327   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:22.958831   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:22.958859   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:22.958778   73986 retry.go:31] will retry after 3.606356527s: waiting for machine to come up
	I0804 00:36:25.151274   73264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:36:25.164305   73264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
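The 496-byte file copied above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log. Purely as an illustration of the conflist format (not the bytes minikube installs), a typical bridge + host-local configuration for the 10.244.0.0/16 pod CIDR used in this run could be written like this:

# Illustrative only: a generic bridge CNI conflist at the same path (not minikube's exact file)
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
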
	I0804 00:36:25.183090   73264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:36:25.183201   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:25.183209   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-159277 minikube.k8s.io/updated_at=2024_08_04T00_36_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=auto-159277 minikube.k8s.io/primary=true
	I0804 00:36:25.219395   73264 ops.go:34] apiserver oom_adj: -16
	I0804 00:36:25.328311   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:25.828621   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:26.329421   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:26.828884   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:26.569465   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:26.569902   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find current IP address of domain kindnet-159277 in network mk-kindnet-159277
	I0804 00:36:26.569932   73669 main.go:141] libmachine: (kindnet-159277) DBG | I0804 00:36:26.569851   73986 retry.go:31] will retry after 5.561220951s: waiting for machine to come up
	I0804 00:36:27.328747   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:27.828797   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:28.329123   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:28.828738   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:29.328853   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:29.829305   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:30.328566   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:30.828459   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:31.328446   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:31.828786   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
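The repeated "kubectl get sa default" calls above are a readiness poll: kubeadm has already reported success, but the "default" ServiceAccount only appears once the controller-manager's service-account controller has processed the namespace, so minikube retries the lookup until it succeeds. Written as a plain shell loop, the equivalent wait is simply:

# Poll until the default ServiceAccount exists (sketch of the wait performed above)
until kubectl get serviceaccount default -n default >/dev/null 2>&1; do
  sleep 0.5
done
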
	I0804 00:36:33.670443   73844 start.go:364] duration metric: took 39.909591448s to acquireMachinesLock for "calico-159277"
	I0804 00:36:33.670501   73844 start.go:93] Provisioning new machine with config: &{Name:calico-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:36:33.670606   73844 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:36:33.674028   73844 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0804 00:36:33.674218   73844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:36:33.674271   73844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:36:32.134388   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.134925   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has current primary IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.134954   73669 main.go:141] libmachine: (kindnet-159277) Found IP for machine: 192.168.50.99
	I0804 00:36:32.134964   73669 main.go:141] libmachine: (kindnet-159277) Reserving static IP address...
	I0804 00:36:32.135377   73669 main.go:141] libmachine: (kindnet-159277) DBG | unable to find host DHCP lease matching {name: "kindnet-159277", mac: "52:54:00:8f:cb:f3", ip: "192.168.50.99"} in network mk-kindnet-159277
	I0804 00:36:32.213760   73669 main.go:141] libmachine: (kindnet-159277) DBG | Getting to WaitForSSH function...
	I0804 00:36:32.213787   73669 main.go:141] libmachine: (kindnet-159277) Reserved static IP address: 192.168.50.99
	I0804 00:36:32.213800   73669 main.go:141] libmachine: (kindnet-159277) Waiting for SSH to be available...
	I0804 00:36:32.216683   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.217111   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.217142   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.217304   73669 main.go:141] libmachine: (kindnet-159277) DBG | Using SSH client type: external
	I0804 00:36:32.217337   73669 main.go:141] libmachine: (kindnet-159277) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa (-rw-------)
	I0804 00:36:32.217381   73669 main.go:141] libmachine: (kindnet-159277) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:36:32.217401   73669 main.go:141] libmachine: (kindnet-159277) DBG | About to run SSH command:
	I0804 00:36:32.217413   73669 main.go:141] libmachine: (kindnet-159277) DBG | exit 0
	I0804 00:36:32.346658   73669 main.go:141] libmachine: (kindnet-159277) DBG | SSH cmd err, output: <nil>: 
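The long option string a few lines above is libmachine probing for SSH availability: it runs "exit 0" over SSH as the docker user with the freshly generated key, and the machine is only considered reachable once that command returns success. The same probe as a standalone one-liner (host, user, and key path taken from this log):

# Probe SSH reachability the same way libmachine does
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -o ConnectTimeout=10 -o IdentitiesOnly=yes \
    -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa \
    docker@192.168.50.99 'exit 0' && echo "SSH is up"
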
	I0804 00:36:32.346940   73669 main.go:141] libmachine: (kindnet-159277) KVM machine creation complete!
	I0804 00:36:32.347301   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetConfigRaw
	I0804 00:36:32.347872   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:32.348100   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:32.348265   73669 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:36:32.348278   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetState
	I0804 00:36:32.349613   73669 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:36:32.349628   73669 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:36:32.349636   73669 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:36:32.349643   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:32.352232   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.352636   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.352675   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.352833   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:32.353029   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.353210   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.353371   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:32.353563   73669 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:32.353746   73669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0804 00:36:32.353760   73669 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:36:32.460785   73669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:36:32.460814   73669 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:36:32.460824   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:32.463690   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.464097   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.464122   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.464305   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:32.464505   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.464689   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.464842   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:32.465037   73669 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:32.465205   73669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0804 00:36:32.465214   73669 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:36:32.574425   73669 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:36:32.574516   73669 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:36:32.574530   73669 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:36:32.574542   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetMachineName
	I0804 00:36:32.574835   73669 buildroot.go:166] provisioning hostname "kindnet-159277"
	I0804 00:36:32.574859   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetMachineName
	I0804 00:36:32.575061   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:32.578137   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.578531   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.578561   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.578662   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:32.578858   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.579017   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.579177   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:32.579377   73669 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:32.579547   73669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0804 00:36:32.579559   73669 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-159277 && echo "kindnet-159277" | sudo tee /etc/hostname
	I0804 00:36:32.701040   73669 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-159277
	
	I0804 00:36:32.701072   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:32.704230   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.704617   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.704650   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.704783   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:32.704975   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.705152   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.705331   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:32.705550   73669 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:32.705738   73669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0804 00:36:32.705755   73669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-159277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-159277/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-159277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:36:32.818790   73669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:36:32.818819   73669 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:36:32.818868   73669 buildroot.go:174] setting up certificates
	I0804 00:36:32.818878   73669 provision.go:84] configureAuth start
	I0804 00:36:32.818887   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetMachineName
	I0804 00:36:32.819178   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetIP
	I0804 00:36:32.821950   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.822342   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.822367   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.822466   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:32.824642   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.824961   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.824997   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.825090   73669 provision.go:143] copyHostCerts
	I0804 00:36:32.825173   73669 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:36:32.825185   73669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:36:32.825250   73669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:36:32.825386   73669 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:36:32.825397   73669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:36:32.825433   73669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:36:32.825521   73669 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:36:32.825530   73669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:36:32.825558   73669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:36:32.825629   73669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.kindnet-159277 san=[127.0.0.1 192.168.50.99 kindnet-159277 localhost minikube]
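The server certificate above is generated in-process: a key pair signed by the minikube CA, with the SAN list printed in the log (loopback, the VM's IP, the machine name, localhost, minikube). An equivalent sketch using the openssl CLI, assuming ca.pem/ca-key.pem are the CA pair referenced earlier and with purely illustrative output file names:

# Issue a server certificate signed by the local CA with the same SANs (bash; uses process substitution)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr -subj "/O=jenkins.kindnet-159277"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 365 \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.99,DNS:kindnet-159277,DNS:localhost,DNS:minikube')
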
	I0804 00:36:32.986276   73669 provision.go:177] copyRemoteCerts
	I0804 00:36:32.986339   73669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:36:32.986367   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:32.989105   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.989406   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:32.989437   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:32.989688   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:32.989884   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:32.990072   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:32.990189   73669 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa Username:docker}
	I0804 00:36:33.072508   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:36:33.097081   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0804 00:36:33.121338   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:36:33.145320   73669 provision.go:87] duration metric: took 326.431825ms to configureAuth
	I0804 00:36:33.145348   73669 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:36:33.145543   73669 config.go:182] Loaded profile config "kindnet-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:36:33.145621   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:33.148539   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.149013   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.149044   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.149238   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:33.149449   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:33.149607   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:33.149732   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:33.149892   73669 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:33.150045   73669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0804 00:36:33.150063   73669 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:36:33.424245   73669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
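A note on the command a few lines up: the "%!s(MISSING)" fragment is an artifact of how the command string is logged (a Go format verb left without an argument), not part of what actually ran; the effective result is the small sysconfig drop-in echoed in the output, followed by a CRI-O restart. Reproduced by hand, the step is roughly:

# Recreate the CRI-O drop-in and restart the runtime (path and value as shown in the log)
sudo mkdir -p /etc/sysconfig
printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
  | sudo tee /etc/sysconfig/crio.minikube
sudo systemctl restart crio
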
	I0804 00:36:33.424270   73669 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:36:33.424277   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetURL
	I0804 00:36:33.425655   73669 main.go:141] libmachine: (kindnet-159277) DBG | Using libvirt version 6000000
	I0804 00:36:33.428287   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.428636   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.428682   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.428858   73669 main.go:141] libmachine: Docker is up and running!
	I0804 00:36:33.428872   73669 main.go:141] libmachine: Reticulating splines...
	I0804 00:36:33.428879   73669 client.go:171] duration metric: took 26.273247507s to LocalClient.Create
	I0804 00:36:33.428901   73669 start.go:167] duration metric: took 26.273316077s to libmachine.API.Create "kindnet-159277"
	I0804 00:36:33.428909   73669 start.go:293] postStartSetup for "kindnet-159277" (driver="kvm2")
	I0804 00:36:33.428919   73669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:36:33.428935   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:33.429178   73669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:36:33.429201   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:33.431419   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.431764   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.431789   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.431910   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:33.432100   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:33.432242   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:33.432419   73669 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa Username:docker}
	I0804 00:36:33.517538   73669 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:36:33.521901   73669 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:36:33.521923   73669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:36:33.521990   73669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:36:33.522083   73669 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:36:33.522205   73669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:36:33.532758   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:36:33.558303   73669 start.go:296] duration metric: took 129.381557ms for postStartSetup
	I0804 00:36:33.558358   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetConfigRaw
	I0804 00:36:33.559024   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetIP
	I0804 00:36:33.561752   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.562129   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.562158   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.562336   73669 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/config.json ...
	I0804 00:36:33.562524   73669 start.go:128] duration metric: took 26.431986331s to createHost
	I0804 00:36:33.562549   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:33.564699   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.565061   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.565087   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.565224   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:33.565410   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:33.565565   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:33.565707   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:33.565864   73669 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:33.566067   73669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0804 00:36:33.566080   73669 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:36:33.670263   73669 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731793.639624087
	
	I0804 00:36:33.670290   73669 fix.go:216] guest clock: 1722731793.639624087
	I0804 00:36:33.670300   73669 fix.go:229] Guest: 2024-08-04 00:36:33.639624087 +0000 UTC Remote: 2024-08-04 00:36:33.562535633 +0000 UTC m=+42.712364752 (delta=77.088454ms)
	I0804 00:36:33.670344   73669 fix.go:200] guest clock delta is within tolerance: 77.088454ms
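fix.go here reads the guest's wall clock over SSH (the `date +%s.%N` above), parses it, and compares it against the host time captured just before the command ran; provisioning continues only if the drift stays inside a fixed tolerance. A minimal sketch of that comparison in Go, reusing the timestamps from this run and assuming a hypothetical one-second tolerance (the real threshold lives in fix.go):

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether it
// is within tolerance of the host clock captured before the SSH call.
func clockDeltaOK(guestEpoch string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	var sec, nsec int64
	fmt.Sscanf(guestEpoch, "%d.%d", &sec, &nsec)
	delta := time.Unix(sec, nsec).Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	host := time.Date(2024, 8, 4, 0, 36, 33, 562535633, time.UTC) // the "Remote" timestamp above
	delta, ok := clockDeltaOK("1722731793.639624087", host, time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok) // ~77ms, true
}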
	I0804 00:36:33.670351   73669 start.go:83] releasing machines lock for "kindnet-159277", held for 26.539989005s
	I0804 00:36:33.670379   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:33.670666   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetIP
	I0804 00:36:33.673747   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.674153   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.674179   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.674406   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:33.674964   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:33.675145   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:36:33.675227   73669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:36:33.675258   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:33.675336   73669 ssh_runner.go:195] Run: cat /version.json
	I0804 00:36:33.675359   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:36:33.678976   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.679040   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.679419   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.679449   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.679490   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:33.679509   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:33.679611   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:33.679812   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:33.679821   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:36:33.679967   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:36:33.679974   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:33.680114   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:36:33.680190   73669 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa Username:docker}
	I0804 00:36:33.680248   73669 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa Username:docker}
	I0804 00:36:33.760078   73669 ssh_runner.go:195] Run: systemctl --version
	I0804 00:36:33.788000   73669 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:36:33.955943   73669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:36:33.963412   73669 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:36:33.963486   73669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:36:33.981335   73669 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:36:33.981387   73669 start.go:495] detecting cgroup driver to use...
	I0804 00:36:33.981455   73669 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:36:33.999017   73669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:36:34.014263   73669 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:36:34.014329   73669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:36:34.029042   73669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:36:34.044184   73669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:36:34.167070   73669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:36:34.343236   73669 docker.go:233] disabling docker service ...
	I0804 00:36:34.343319   73669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:36:34.361158   73669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:36:34.377045   73669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:36:34.525700   73669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:36:34.656523   73669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:36:34.674441   73669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:36:34.695948   73669 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:36:34.696009   73669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:34.707366   73669 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:36:34.707428   73669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:34.718988   73669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:34.729737   73669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:34.740391   73669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:36:34.754581   73669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:34.766433   73669 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:34.784763   73669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:36:34.796203   73669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:36:34.806049   73669 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:36:34.806115   73669 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:36:34.820748   73669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:36:34.836949   73669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:36:34.986608   73669 ssh_runner.go:195] Run: sudo systemctl restart crio
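The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, set conmon_cgroup to pod, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before crio is restarted. A rough Go sketch of the same line-oriented substitutions on a sample config string (the sample contents below are assumptions, not the real file):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same effect as the sed edits in the log above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}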
	I0804 00:36:35.154866   73669 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:36:35.154934   73669 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:36:35.160412   73669 start.go:563] Will wait 60s for crictl version
	I0804 00:36:35.160481   73669 ssh_runner.go:195] Run: which crictl
	I0804 00:36:35.164641   73669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:36:35.215692   73669 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:36:35.215760   73669 ssh_runner.go:195] Run: crio --version
	I0804 00:36:35.250441   73669 ssh_runner.go:195] Run: crio --version
	I0804 00:36:35.285489   73669 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:36:35.286726   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetIP
	I0804 00:36:35.290477   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:35.291029   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:36:35.291078   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:36:35.291287   73669 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:36:35.297038   73669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:36:35.313521   73669 kubeadm.go:883] updating cluster {Name:kindnet-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:kindnet-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:36:35.313631   73669 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:36:35.313694   73669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:36:35.360977   73669 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:36:35.361066   73669 ssh_runner.go:195] Run: which lz4
	I0804 00:36:35.365896   73669 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:36:35.372254   73669 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:36:35.372302   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:36:32.328688   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:32.828750   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:33.328838   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:33.828554   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:34.328503   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:34.829075   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:35.329011   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:35.828393   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:36.328548   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:36.828839   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:33.694096   73844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0804 00:36:33.694504   73844 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:36:33.695123   73844 main.go:141] libmachine: Using API Version  1
	I0804 00:36:33.695146   73844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:36:33.695548   73844 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:36:33.695788   73844 main.go:141] libmachine: (calico-159277) Calling .GetMachineName
	I0804 00:36:33.695931   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:36:33.696108   73844 start.go:159] libmachine.API.Create for "calico-159277" (driver="kvm2")
	I0804 00:36:33.696139   73844 client.go:168] LocalClient.Create starting
	I0804 00:36:33.696179   73844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0804 00:36:33.696220   73844 main.go:141] libmachine: Decoding PEM data...
	I0804 00:36:33.696242   73844 main.go:141] libmachine: Parsing certificate...
	I0804 00:36:33.696316   73844 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0804 00:36:33.696345   73844 main.go:141] libmachine: Decoding PEM data...
	I0804 00:36:33.696368   73844 main.go:141] libmachine: Parsing certificate...
	I0804 00:36:33.696394   73844 main.go:141] libmachine: Running pre-create checks...
	I0804 00:36:33.696409   73844 main.go:141] libmachine: (calico-159277) Calling .PreCreateCheck
	I0804 00:36:33.696835   73844 main.go:141] libmachine: (calico-159277) Calling .GetConfigRaw
	I0804 00:36:33.697287   73844 main.go:141] libmachine: Creating machine...
	I0804 00:36:33.697298   73844 main.go:141] libmachine: (calico-159277) Calling .Create
	I0804 00:36:33.697465   73844 main.go:141] libmachine: (calico-159277) Creating KVM machine...
	I0804 00:36:33.698821   73844 main.go:141] libmachine: (calico-159277) DBG | found existing default KVM network
	I0804 00:36:33.700159   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:33.699969   74238 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b7:12:58} reservation:<nil>}
	I0804 00:36:33.701248   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:33.701159   74238 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3c:65:de} reservation:<nil>}
	I0804 00:36:33.702472   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:33.702386   74238 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000287240}
	I0804 00:36:33.702498   73844 main.go:141] libmachine: (calico-159277) DBG | created network xml: 
	I0804 00:36:33.702507   73844 main.go:141] libmachine: (calico-159277) DBG | <network>
	I0804 00:36:33.702517   73844 main.go:141] libmachine: (calico-159277) DBG |   <name>mk-calico-159277</name>
	I0804 00:36:33.702524   73844 main.go:141] libmachine: (calico-159277) DBG |   <dns enable='no'/>
	I0804 00:36:33.702532   73844 main.go:141] libmachine: (calico-159277) DBG |   
	I0804 00:36:33.702542   73844 main.go:141] libmachine: (calico-159277) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0804 00:36:33.702550   73844 main.go:141] libmachine: (calico-159277) DBG |     <dhcp>
	I0804 00:36:33.702561   73844 main.go:141] libmachine: (calico-159277) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0804 00:36:33.702577   73844 main.go:141] libmachine: (calico-159277) DBG |     </dhcp>
	I0804 00:36:33.702585   73844 main.go:141] libmachine: (calico-159277) DBG |   </ip>
	I0804 00:36:33.702596   73844 main.go:141] libmachine: (calico-159277) DBG |   
	I0804 00:36:33.702603   73844 main.go:141] libmachine: (calico-159277) DBG | </network>
	I0804 00:36:33.702612   73844 main.go:141] libmachine: (calico-159277) DBG | 
	I0804 00:36:33.708573   73844 main.go:141] libmachine: (calico-159277) DBG | trying to create private KVM network mk-calico-159277 192.168.61.0/24...
	I0804 00:36:33.788685   73844 main.go:141] libmachine: (calico-159277) DBG | private KVM network mk-calico-159277 192.168.61.0/24 created
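network.go above scans candidate private /24s, skips 192.168.39.0/24 and 192.168.50.0/24 because existing libvirt bridges already hold them, and settles on 192.168.61.0/24 for mk-calico-159277. A simplified sketch of that scan; the candidate step and the hard-coded taken set are illustrative only, not minikube's actual policy:

package main

import "fmt"

// firstFreeSubnet returns the first 192.168.x.0/24 candidate not already used
// by an existing network. The step of 11 between candidates mirrors the gaps
// seen in the log (39 -> 50 -> 61) but is an assumption.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 39; third <= 254; third += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.39.0/24": true, // virbr2, already taken
		"192.168.50.0/24": true, // virbr1, already taken
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.61.0/24
}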
	I0804 00:36:33.788729   73844 main.go:141] libmachine: (calico-159277) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277 ...
	I0804 00:36:33.788747   73844 main.go:141] libmachine: (calico-159277) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:36:33.788760   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:33.788671   74238 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:36:33.788794   73844 main.go:141] libmachine: (calico-159277) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:36:34.063932   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:34.063778   74238 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/id_rsa...
	I0804 00:36:34.150545   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:34.150434   74238 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/calico-159277.rawdisk...
	I0804 00:36:34.150570   73844 main.go:141] libmachine: (calico-159277) DBG | Writing magic tar header
	I0804 00:36:34.150580   73844 main.go:141] libmachine: (calico-159277) DBG | Writing SSH key tar header
	I0804 00:36:34.150658   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:34.150611   74238 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277 ...
	I0804 00:36:34.150961   73844 main.go:141] libmachine: (calico-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277 (perms=drwx------)
	I0804 00:36:34.150988   73844 main.go:141] libmachine: (calico-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277
	I0804 00:36:34.151000   73844 main.go:141] libmachine: (calico-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:36:34.151014   73844 main.go:141] libmachine: (calico-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0804 00:36:34.151038   73844 main.go:141] libmachine: (calico-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0804 00:36:34.151051   73844 main.go:141] libmachine: (calico-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0804 00:36:34.151061   73844 main.go:141] libmachine: (calico-159277) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:36:34.151071   73844 main.go:141] libmachine: (calico-159277) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:36:34.151088   73844 main.go:141] libmachine: (calico-159277) Creating domain...
	I0804 00:36:34.151107   73844 main.go:141] libmachine: (calico-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:36:34.151120   73844 main.go:141] libmachine: (calico-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0804 00:36:34.151132   73844 main.go:141] libmachine: (calico-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:36:34.151144   73844 main.go:141] libmachine: (calico-159277) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:36:34.151150   73844 main.go:141] libmachine: (calico-159277) DBG | Checking permissions on dir: /home
	I0804 00:36:34.151164   73844 main.go:141] libmachine: (calico-159277) DBG | Skipping /home - not owner
	I0804 00:36:34.152551   73844 main.go:141] libmachine: (calico-159277) define libvirt domain using xml: 
	I0804 00:36:34.152570   73844 main.go:141] libmachine: (calico-159277) <domain type='kvm'>
	I0804 00:36:34.152579   73844 main.go:141] libmachine: (calico-159277)   <name>calico-159277</name>
	I0804 00:36:34.152586   73844 main.go:141] libmachine: (calico-159277)   <memory unit='MiB'>3072</memory>
	I0804 00:36:34.152594   73844 main.go:141] libmachine: (calico-159277)   <vcpu>2</vcpu>
	I0804 00:36:34.152600   73844 main.go:141] libmachine: (calico-159277)   <features>
	I0804 00:36:34.152607   73844 main.go:141] libmachine: (calico-159277)     <acpi/>
	I0804 00:36:34.152629   73844 main.go:141] libmachine: (calico-159277)     <apic/>
	I0804 00:36:34.152638   73844 main.go:141] libmachine: (calico-159277)     <pae/>
	I0804 00:36:34.152643   73844 main.go:141] libmachine: (calico-159277)     
	I0804 00:36:34.152651   73844 main.go:141] libmachine: (calico-159277)   </features>
	I0804 00:36:34.152669   73844 main.go:141] libmachine: (calico-159277)   <cpu mode='host-passthrough'>
	I0804 00:36:34.152677   73844 main.go:141] libmachine: (calico-159277)   
	I0804 00:36:34.152687   73844 main.go:141] libmachine: (calico-159277)   </cpu>
	I0804 00:36:34.152695   73844 main.go:141] libmachine: (calico-159277)   <os>
	I0804 00:36:34.152708   73844 main.go:141] libmachine: (calico-159277)     <type>hvm</type>
	I0804 00:36:34.152790   73844 main.go:141] libmachine: (calico-159277)     <boot dev='cdrom'/>
	I0804 00:36:34.152832   73844 main.go:141] libmachine: (calico-159277)     <boot dev='hd'/>
	I0804 00:36:34.152845   73844 main.go:141] libmachine: (calico-159277)     <bootmenu enable='no'/>
	I0804 00:36:34.152853   73844 main.go:141] libmachine: (calico-159277)   </os>
	I0804 00:36:34.152861   73844 main.go:141] libmachine: (calico-159277)   <devices>
	I0804 00:36:34.152870   73844 main.go:141] libmachine: (calico-159277)     <disk type='file' device='cdrom'>
	I0804 00:36:34.152884   73844 main.go:141] libmachine: (calico-159277)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/boot2docker.iso'/>
	I0804 00:36:34.152891   73844 main.go:141] libmachine: (calico-159277)       <target dev='hdc' bus='scsi'/>
	I0804 00:36:34.152918   73844 main.go:141] libmachine: (calico-159277)       <readonly/>
	I0804 00:36:34.152943   73844 main.go:141] libmachine: (calico-159277)     </disk>
	I0804 00:36:34.152956   73844 main.go:141] libmachine: (calico-159277)     <disk type='file' device='disk'>
	I0804 00:36:34.152969   73844 main.go:141] libmachine: (calico-159277)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:36:34.152981   73844 main.go:141] libmachine: (calico-159277)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/calico-159277.rawdisk'/>
	I0804 00:36:34.152991   73844 main.go:141] libmachine: (calico-159277)       <target dev='hda' bus='virtio'/>
	I0804 00:36:34.152996   73844 main.go:141] libmachine: (calico-159277)     </disk>
	I0804 00:36:34.153004   73844 main.go:141] libmachine: (calico-159277)     <interface type='network'>
	I0804 00:36:34.153011   73844 main.go:141] libmachine: (calico-159277)       <source network='mk-calico-159277'/>
	I0804 00:36:34.153022   73844 main.go:141] libmachine: (calico-159277)       <model type='virtio'/>
	I0804 00:36:34.153039   73844 main.go:141] libmachine: (calico-159277)     </interface>
	I0804 00:36:34.153051   73844 main.go:141] libmachine: (calico-159277)     <interface type='network'>
	I0804 00:36:34.153060   73844 main.go:141] libmachine: (calico-159277)       <source network='default'/>
	I0804 00:36:34.153071   73844 main.go:141] libmachine: (calico-159277)       <model type='virtio'/>
	I0804 00:36:34.153079   73844 main.go:141] libmachine: (calico-159277)     </interface>
	I0804 00:36:34.153091   73844 main.go:141] libmachine: (calico-159277)     <serial type='pty'>
	I0804 00:36:34.153102   73844 main.go:141] libmachine: (calico-159277)       <target port='0'/>
	I0804 00:36:34.153116   73844 main.go:141] libmachine: (calico-159277)     </serial>
	I0804 00:36:34.153129   73844 main.go:141] libmachine: (calico-159277)     <console type='pty'>
	I0804 00:36:34.153137   73844 main.go:141] libmachine: (calico-159277)       <target type='serial' port='0'/>
	I0804 00:36:34.153148   73844 main.go:141] libmachine: (calico-159277)     </console>
	I0804 00:36:34.153159   73844 main.go:141] libmachine: (calico-159277)     <rng model='virtio'>
	I0804 00:36:34.153172   73844 main.go:141] libmachine: (calico-159277)       <backend model='random'>/dev/random</backend>
	I0804 00:36:34.153180   73844 main.go:141] libmachine: (calico-159277)     </rng>
	I0804 00:36:34.153185   73844 main.go:141] libmachine: (calico-159277)     
	I0804 00:36:34.153206   73844 main.go:141] libmachine: (calico-159277)     
	I0804 00:36:34.153228   73844 main.go:141] libmachine: (calico-159277)   </devices>
	I0804 00:36:34.153239   73844 main.go:141] libmachine: (calico-159277) </domain>
	I0804 00:36:34.153264   73844 main.go:141] libmachine: (calico-159277) 
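The `<domain>` XML above is what gets defined in libvirt: 3072 MiB of memory, 2 vCPUs, the boot2docker ISO as a cdrom, the raw disk, and two virtio NICs (the private mk-calico-159277 network plus the default one). A minimal sketch of generating a similar skeleton with encoding/xml; the struct shape is an assumption for illustration and omits the disks, serial console, and rng device shown above:

package main

import (
	"encoding/xml"
	"fmt"
)

type iface struct {
	Type   string `xml:"type,attr"`
	Source struct {
		Network string `xml:"network,attr"`
	} `xml:"source"`
}

type domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  int      `xml:"memory"` // the real template also sets unit='MiB'
	VCPU    int      `xml:"vcpu"`
	NICs    []iface  `xml:"devices>interface"`
}

func main() {
	var priv, def iface
	priv.Type, def.Type = "network", "network"
	priv.Source.Network = "mk-calico-159277"
	def.Source.Network = "default"

	d := domain{Type: "kvm", Name: "calico-159277", Memory: 3072, VCPU: 2, NICs: []iface{priv, def}}
	out, _ := xml.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}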
	I0804 00:36:34.160920   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:82:15:aa in network default
	I0804 00:36:34.161677   73844 main.go:141] libmachine: (calico-159277) Ensuring networks are active...
	I0804 00:36:34.161703   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:34.162440   73844 main.go:141] libmachine: (calico-159277) Ensuring network default is active
	I0804 00:36:34.162780   73844 main.go:141] libmachine: (calico-159277) Ensuring network mk-calico-159277 is active
	I0804 00:36:34.163503   73844 main.go:141] libmachine: (calico-159277) Getting domain xml...
	I0804 00:36:34.164352   73844 main.go:141] libmachine: (calico-159277) Creating domain...
	I0804 00:36:35.534693   73844 main.go:141] libmachine: (calico-159277) Waiting to get IP...
	I0804 00:36:35.535766   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:35.536279   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:35.536357   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:35.536270   74238 retry.go:31] will retry after 264.659047ms: waiting for machine to come up
	I0804 00:36:35.802854   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:35.803544   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:35.803578   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:35.803481   74238 retry.go:31] will retry after 326.214937ms: waiting for machine to come up
	I0804 00:36:36.131039   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:36.131642   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:36.131680   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:36.131598   74238 retry.go:31] will retry after 419.443096ms: waiting for machine to come up
	I0804 00:36:36.552593   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:36.553633   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:36.553723   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:36.553677   74238 retry.go:31] will retry after 455.143386ms: waiting for machine to come up
	I0804 00:36:37.010043   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:37.010500   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:37.010531   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:37.010472   74238 retry.go:31] will retry after 735.829494ms: waiting for machine to come up
	I0804 00:36:37.748739   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:37.749349   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:37.749393   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:37.749253   74238 retry.go:31] will retry after 584.495718ms: waiting for machine to come up
	I0804 00:36:38.335651   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:38.336090   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:38.336121   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:38.336039   74238 retry.go:31] will retry after 752.863304ms: waiting for machine to come up
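While the freshly defined domain boots, the driver polls the DHCP leases for an IP and, on every miss, sleeps a randomized and roughly growing interval (265ms, 326ms, 419ms, ... above) before retrying. A hedged sketch of that loop; the jitter formula and the lookup stub are placeholders, not retry.go's actual backoff:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP probes lookup until it succeeds or the deadline passes, sleeping a
// jittered, slowly growing interval between attempts.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	for attempt := 1; time.Since(start) < deadline; attempt++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := time.Duration(200+rand.Intn(200))*time.Millisecond + // random base
			time.Duration(50*attempt)*time.Millisecond // grows per attempt
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
		time.Sleep(wait)
	}
	return "", errors.New("machine did not get an IP before the deadline")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.61.2", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}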
	I0804 00:36:37.328702   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:37.829189   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:38.328795   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:38.829391   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:39.518371   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:40.125067   73264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:40.277227   73264 kubeadm.go:1113] duration metric: took 15.094119147s to wait for elevateKubeSystemPrivileges
	I0804 00:36:40.277258   73264 kubeadm.go:394] duration metric: took 26.07549788s to StartCluster
	I0804 00:36:40.277275   73264 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.277364   73264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:36:40.278600   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.278840   73264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 00:36:40.278849   73264 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:36:40.278934   73264 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:36:40.279012   73264 config.go:182] Loaded profile config "auto-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:36:40.279019   73264 addons.go:69] Setting storage-provisioner=true in profile "auto-159277"
	I0804 00:36:40.279070   73264 addons.go:234] Setting addon storage-provisioner=true in "auto-159277"
	I0804 00:36:40.279069   73264 addons.go:69] Setting default-storageclass=true in profile "auto-159277"
	I0804 00:36:40.279103   73264 host.go:66] Checking if "auto-159277" exists ...
	I0804 00:36:40.279110   73264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-159277"
	I0804 00:36:40.279464   73264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:36:40.279484   73264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:36:40.279525   73264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:36:40.279548   73264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:36:40.280441   73264 out.go:177] * Verifying Kubernetes components...
	I0804 00:36:40.281920   73264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:36:40.300432   73264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0804 00:36:40.300641   73264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0804 00:36:40.300986   73264 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:36:40.301034   73264 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:36:40.301492   73264 main.go:141] libmachine: Using API Version  1
	I0804 00:36:40.301512   73264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:36:40.301495   73264 main.go:141] libmachine: Using API Version  1
	I0804 00:36:40.301532   73264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:36:40.301876   73264 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:36:40.301933   73264 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:36:40.302676   73264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:36:40.302714   73264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:36:40.302929   73264 main.go:141] libmachine: (auto-159277) Calling .GetState
	I0804 00:36:40.306972   73264 addons.go:234] Setting addon default-storageclass=true in "auto-159277"
	I0804 00:36:40.307014   73264 host.go:66] Checking if "auto-159277" exists ...
	I0804 00:36:40.307377   73264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:36:40.307412   73264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:36:40.322503   73264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36743
	I0804 00:36:40.323088   73264 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:36:40.323644   73264 main.go:141] libmachine: Using API Version  1
	I0804 00:36:40.323663   73264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:36:40.324055   73264 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:36:40.324634   73264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:36:40.324681   73264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:36:40.325801   73264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I0804 00:36:40.329864   73264 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:36:40.330434   73264 main.go:141] libmachine: Using API Version  1
	I0804 00:36:40.330453   73264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:36:40.330814   73264 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:36:40.331010   73264 main.go:141] libmachine: (auto-159277) Calling .GetState
	I0804 00:36:40.333084   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:40.335062   73264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:36:36.940523   73669 crio.go:462] duration metric: took 1.574688847s to copy over tarball
	I0804 00:36:36.940602   73669 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:36:39.596823   73669 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.656176344s)
	I0804 00:36:39.596863   73669 crio.go:469] duration metric: took 2.656311985s to extract the tarball
	I0804 00:36:39.596874   73669 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:36:39.636853   73669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:36:39.685679   73669 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:36:39.685710   73669 cache_images.go:84] Images are preloaded, skipping loading
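The preload path above stats /preloaded.tar.lz4, copies the ~400 MB preloaded-images tarball over when it is missing, unpacks it into /var with tar and lz4, deletes it, and re-runs `crictl images` to confirm the runtime now sees the images. A small sketch of that extract-then-verify shape with os/exec; the paths, sudo access, and an lz4 binary on the host are assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the tar invocation in the log, then asks crictl
// whether the runtime can see images afterwards.
func extractPreload(tarball, dest string) error {
	if out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("crictl images: %v", err)
	}
	fmt.Printf("crictl returned %d bytes of image metadata\n", len(out))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}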
	I0804 00:36:39.685721   73669 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.30.3 crio true true} ...
	I0804 00:36:39.685865   73669 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-159277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0804 00:36:39.685955   73669 ssh_runner.go:195] Run: crio config
	I0804 00:36:39.735861   73669 cni.go:84] Creating CNI manager for "kindnet"
	I0804 00:36:39.735893   73669 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:36:39.735931   73669 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-159277 NodeName:kindnet-159277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:36:39.736239   73669 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-159277"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
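kubeadm.go renders the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents above from the cluster settings (node IP 192.168.50.99, name kindnet-159277, pod CIDR 10.244.0.0/16, CRI socket unix:///var/run/crio/crio.sock) and ships the result to /var/tmp/minikube/kubeadm.yaml.new. A toy rendering of one abbreviated fragment with text/template; the template text below is a cut-down stand-in, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	data := struct {
		NodeIP, NodeName, CRISocket string
		APIServerPort               int
	}{
		NodeIP:        "192.168.50.99",
		NodeName:      "kindnet-159277",
		CRISocket:     "unix:///var/run/crio/crio.sock",
		APIServerPort: 8443,
	}
	// Print the rendered fragment; the real flow writes it to kubeadm.yaml.new over SSH.
	template.Must(template.New("kubeadm").Parse(fragment)).Execute(os.Stdout, data)
}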
	
	I0804 00:36:39.736318   73669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:36:39.747006   73669 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:36:39.747098   73669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:36:39.756827   73669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 00:36:39.777868   73669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:36:39.800133   73669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0804 00:36:39.820980   73669 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0804 00:36:39.825582   73669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:36:39.839691   73669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:36:39.975075   73669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:36:39.994446   73669 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277 for IP: 192.168.50.99
	I0804 00:36:39.994467   73669 certs.go:194] generating shared ca certs ...
	I0804 00:36:39.994481   73669 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:39.994639   73669 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:36:39.994698   73669 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:36:39.994711   73669 certs.go:256] generating profile certs ...
	I0804 00:36:39.994768   73669 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/client.key
	I0804 00:36:39.994787   73669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/client.crt with IP's: []
	I0804 00:36:40.163166   73669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/client.crt ...
	I0804 00:36:40.163195   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/client.crt: {Name:mkdde1a16ccdc680b90ca08432fb6b8289e1b209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.163453   73669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/client.key ...
	I0804 00:36:40.163471   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/client.key: {Name:mk3f59d3bf6f975bfc7c5417d96bbcef9a12685b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.163595   73669 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.key.d38abca0
	I0804 00:36:40.163618   73669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.crt.d38abca0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.99]
	I0804 00:36:40.299554   73669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.crt.d38abca0 ...
	I0804 00:36:40.299594   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.crt.d38abca0: {Name:mk3e95e9c22e5c2681fb8c2eef8a2536dd69aadb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.299805   73669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.key.d38abca0 ...
	I0804 00:36:40.299836   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.key.d38abca0: {Name:mk4a1c91710ab22dc0bf885b3c933ca12cca7ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.299967   73669 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.crt.d38abca0 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.crt
	I0804 00:36:40.300095   73669 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.key.d38abca0 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.key
	I0804 00:36:40.300182   73669 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.key
	I0804 00:36:40.300205   73669 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.crt with IP's: []
	I0804 00:36:40.369740   73669 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.crt ...
	I0804 00:36:40.369768   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.crt: {Name:mkcf3a963d7d783932c3cbf07a22a3c39091c4fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.369953   73669 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.key ...
	I0804 00:36:40.369967   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.key: {Name:mk86b2c98951608aa44b6ec1defc6ac5c5025346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:36:40.370199   73669 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:36:40.370254   73669 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:36:40.370267   73669 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:36:40.370292   73669 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:36:40.370326   73669 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:36:40.370357   73669 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:36:40.370421   73669 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:36:40.371200   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:36:40.408811   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:36:40.442624   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:36:40.472197   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:36:40.506439   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0804 00:36:40.542716   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 00:36:40.576373   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:36:40.605805   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/kindnet-159277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:36:40.637821   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:36:40.669533   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:36:40.707922   73669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:36:40.751529   73669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:36:40.782336   73669 ssh_runner.go:195] Run: openssl version
	I0804 00:36:40.788583   73669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:36:40.802079   73669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:36:40.808569   73669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:36:40.808639   73669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:36:40.815612   73669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:36:40.829419   73669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:36:40.842116   73669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:36:40.847730   73669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:36:40.847808   73669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:36:40.854316   73669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:36:40.867814   73669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:36:40.881678   73669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:36:40.887271   73669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:36:40.887345   73669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:36:40.893806   73669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
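The run of ssh commands above is the standard pattern for installing an extra CA into the guest's trust store: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs, then add a second symlink named after the certificate's OpenSSL subject hash so the library's lookup path can find it. A minimal shell sketch of that pattern, with a generic file name standing in for the paths above (not minikube's exact code):

    # install a CA certificate the way the commands above do (sketch, generic names)
    CERT=/usr/share/ca-certificates/example.pem
    sudo test -s "$CERT" && sudo ln -fs "$CERT" /etc/ssl/certs/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # subject hash, e.g. 3ec20f2e
    sudo test -L "/etc/ssl/certs/$HASH.0" || sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/$HASH.0"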
	I0804 00:36:40.336540   73264 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:36:40.336560   73264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:36:40.336578   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:40.339742   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:40.340155   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:40.340204   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:40.340467   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:40.340673   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:40.340859   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:40.341016   73264 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa Username:docker}
	I0804 00:36:40.344272   73264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34893
	I0804 00:36:40.344666   73264 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:36:40.345083   73264 main.go:141] libmachine: Using API Version  1
	I0804 00:36:40.345099   73264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:36:40.345444   73264 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:36:40.345624   73264 main.go:141] libmachine: (auto-159277) Calling .GetState
	I0804 00:36:40.346987   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:36:40.347268   73264 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:36:40.347286   73264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:36:40.347304   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHHostname
	I0804 00:36:40.349967   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:40.350358   73264 main.go:141] libmachine: (auto-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:56:51", ip: ""} in network mk-auto-159277: {Iface:virbr4 ExpiryTime:2024-08-04 01:35:57 +0000 UTC Type:0 Mac:52:54:00:99:56:51 Iaid: IPaddr:192.168.72.144 Prefix:24 Hostname:auto-159277 Clientid:01:52:54:00:99:56:51}
	I0804 00:36:40.350386   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined IP address 192.168.72.144 and MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:36:40.350633   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHPort
	I0804 00:36:40.350834   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHKeyPath
	I0804 00:36:40.350986   73264 main.go:141] libmachine: (auto-159277) Calling .GetSSHUsername
	I0804 00:36:40.351102   73264 sshutil.go:53] new ssh client: &{IP:192.168.72.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa Username:docker}
	I0804 00:36:40.560466   73264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:36:40.560681   73264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 00:36:40.683368   73264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:36:40.759607   73264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:36:41.008099   73264 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
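The "host record injected" line is the outcome of the sed pipeline run at 00:36:40.560681: it splices a hosts plugin block into the coredns Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.72.1 here). A quick way to confirm the patched ConfigMap, assuming kubectl is pointed at this cluster (not part of the log above):

    # sketch: inspect the Corefile for the injected hosts block
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # expected to contain: 192.168.72.1 host.minikube.internal, followed by fallthrough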
	I0804 00:36:41.009422   73264 node_ready.go:35] waiting up to 15m0s for node "auto-159277" to be "Ready" ...
	I0804 00:36:41.022719   73264 node_ready.go:49] node "auto-159277" has status "Ready":"True"
	I0804 00:36:41.022749   73264 node_ready.go:38] duration metric: took 13.300096ms for node "auto-159277" to be "Ready" ...
	I0804 00:36:41.022767   73264 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:36:41.055406   73264 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace to be "Ready" ...
	I0804 00:36:41.310144   73264 main.go:141] libmachine: Making call to close driver server
	I0804 00:36:41.310181   73264 main.go:141] libmachine: (auto-159277) Calling .Close
	I0804 00:36:41.310239   73264 main.go:141] libmachine: Making call to close driver server
	I0804 00:36:41.310264   73264 main.go:141] libmachine: (auto-159277) Calling .Close
	I0804 00:36:41.310622   73264 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:36:41.310640   73264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:36:41.310657   73264 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:36:41.310650   73264 main.go:141] libmachine: Making call to close driver server
	I0804 00:36:41.310675   73264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:36:41.310690   73264 main.go:141] libmachine: Making call to close driver server
	I0804 00:36:41.310701   73264 main.go:141] libmachine: (auto-159277) Calling .Close
	I0804 00:36:41.310676   73264 main.go:141] libmachine: (auto-159277) Calling .Close
	I0804 00:36:41.310621   73264 main.go:141] libmachine: (auto-159277) DBG | Closing plugin on server side
	I0804 00:36:41.310814   73264 main.go:141] libmachine: (auto-159277) DBG | Closing plugin on server side
	I0804 00:36:41.312337   73264 main.go:141] libmachine: (auto-159277) DBG | Closing plugin on server side
	I0804 00:36:41.312339   73264 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:36:41.312360   73264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:36:41.312344   73264 main.go:141] libmachine: (auto-159277) DBG | Closing plugin on server side
	I0804 00:36:41.312408   73264 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:36:41.312430   73264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:36:41.329451   73264 main.go:141] libmachine: Making call to close driver server
	I0804 00:36:41.329477   73264 main.go:141] libmachine: (auto-159277) Calling .Close
	I0804 00:36:41.329781   73264 main.go:141] libmachine: (auto-159277) DBG | Closing plugin on server side
	I0804 00:36:41.329820   73264 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:36:41.329827   73264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:36:41.332373   73264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0804 00:36:41.333611   73264 addons.go:510] duration metric: took 1.054675444s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0804 00:36:41.512700   73264 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-159277" context rescaled to 1 replicas
	I0804 00:36:39.090508   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:39.091020   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:39.091043   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:39.090981   74238 retry.go:31] will retry after 1.457105419s: waiting for machine to come up
	I0804 00:36:40.550774   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:40.551279   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:40.551306   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:40.551225   74238 retry.go:31] will retry after 1.632616982s: waiting for machine to come up
	I0804 00:36:42.185206   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:42.185532   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:42.185562   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:42.185507   74238 retry.go:31] will retry after 2.103525066s: waiting for machine to come up
	I0804 00:36:40.908285   73669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:36:40.913165   73669 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
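The failed stat is the hint that this is a fresh control plane: with no apiserver-kubelet-client certificate on disk, minikube proceeds to a full kubeadm init (seen below) rather than reusing an existing cluster. Reduced to shell, the check is roughly:

    # sketch of the first-start check implied by the log above
    if ! stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
        echo "no apiserver-kubelet-client cert yet - treating this as a first start"
    fi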
	I0804 00:36:40.913228   73669 kubeadm.go:392] StartCluster: {Name:kindnet-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:kindnet-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:36:40.913322   73669 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:36:40.913407   73669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:36:40.964007   73669 cri.go:89] found id: ""
	I0804 00:36:40.964098   73669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:36:40.976438   73669 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:36:40.987025   73669 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:36:40.998381   73669 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:36:40.998408   73669 kubeadm.go:157] found existing configuration files:
	
	I0804 00:36:40.998487   73669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:36:41.010833   73669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:36:41.010888   73669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:36:41.023809   73669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:36:41.039371   73669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:36:41.039438   73669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:36:41.050842   73669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:36:41.064209   73669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:36:41.064275   73669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:36:41.076363   73669 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:36:41.086690   73669 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:36:41.086768   73669 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:36:41.100065   73669 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:36:41.314755   73669 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:36:43.063512   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:36:45.562153   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:36:44.291017   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:44.291600   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:44.291624   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:44.291546   74238 retry.go:31] will retry after 2.158582992s: waiting for machine to come up
	I0804 00:36:46.452225   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:46.452836   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:46.452869   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:46.452783   74238 retry.go:31] will retry after 3.317392459s: waiting for machine to come up
	I0804 00:36:48.063013   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:36:50.063477   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:36:52.209227   73669 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0804 00:36:52.209291   73669 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:36:52.209401   73669 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:36:52.209503   73669 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:36:52.209641   73669 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:36:52.209712   73669 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:36:52.211374   73669 out.go:204]   - Generating certificates and keys ...
	I0804 00:36:52.211469   73669 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:36:52.211559   73669 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:36:52.211667   73669 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:36:52.211755   73669 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:36:52.211868   73669 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:36:52.211943   73669 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:36:52.212022   73669 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:36:52.212156   73669 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-159277 localhost] and IPs [192.168.50.99 127.0.0.1 ::1]
	I0804 00:36:52.212243   73669 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:36:52.212378   73669 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-159277 localhost] and IPs [192.168.50.99 127.0.0.1 ::1]
	I0804 00:36:52.212485   73669 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:36:52.212575   73669 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:36:52.212639   73669 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:36:52.212715   73669 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:36:52.212771   73669 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:36:52.212839   73669 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:36:52.212904   73669 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:36:52.212987   73669 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:36:52.213082   73669 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:36:52.213188   73669 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:36:52.213293   73669 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:36:52.214546   73669 out.go:204]   - Booting up control plane ...
	I0804 00:36:52.214630   73669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:36:52.214717   73669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:36:52.214793   73669 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:36:52.214919   73669 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:36:52.215026   73669 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:36:52.215084   73669 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:36:52.215227   73669 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:36:52.215289   73669 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0804 00:36:52.215337   73669 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.37286ms
	I0804 00:36:52.215395   73669 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:36:52.215470   73669 kubeadm.go:310] [api-check] The API server is healthy after 5.501766355s
	I0804 00:36:52.215600   73669 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:36:52.215724   73669 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:36:52.215797   73669 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:36:52.216054   73669 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-159277 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:36:52.216151   73669 kubeadm.go:310] [bootstrap-token] Using token: qk22wq.xtrz4sig40n5wph6
	I0804 00:36:52.217556   73669 out.go:204]   - Configuring RBAC rules ...
	I0804 00:36:52.217669   73669 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:36:52.217741   73669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:36:52.217857   73669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:36:52.218022   73669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:36:52.218178   73669 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:36:52.218250   73669 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:36:52.218377   73669 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:36:52.218432   73669 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:36:52.218487   73669 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:36:52.218498   73669 kubeadm.go:310] 
	I0804 00:36:52.218558   73669 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:36:52.218564   73669 kubeadm.go:310] 
	I0804 00:36:52.218628   73669 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:36:52.218637   73669 kubeadm.go:310] 
	I0804 00:36:52.218676   73669 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:36:52.218749   73669 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:36:52.218820   73669 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:36:52.218828   73669 kubeadm.go:310] 
	I0804 00:36:52.218901   73669 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:36:52.218910   73669 kubeadm.go:310] 
	I0804 00:36:52.218978   73669 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:36:52.218990   73669 kubeadm.go:310] 
	I0804 00:36:52.219072   73669 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:36:52.219177   73669 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:36:52.219271   73669 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:36:52.219283   73669 kubeadm.go:310] 
	I0804 00:36:52.219402   73669 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:36:52.219505   73669 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:36:52.219514   73669 kubeadm.go:310] 
	I0804 00:36:52.219620   73669 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qk22wq.xtrz4sig40n5wph6 \
	I0804 00:36:52.219757   73669 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:36:52.219788   73669 kubeadm.go:310] 	--control-plane 
	I0804 00:36:52.219802   73669 kubeadm.go:310] 
	I0804 00:36:52.219914   73669 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:36:52.219924   73669 kubeadm.go:310] 
	I0804 00:36:52.220027   73669 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qk22wq.xtrz4sig40n5wph6 \
	I0804 00:36:52.220187   73669 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:36:52.220210   73669 cni.go:84] Creating CNI manager for "kindnet"
	I0804 00:36:52.221688   73669 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0804 00:36:49.771999   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:49.772529   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:49.772559   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:49.772471   74238 retry.go:31] will retry after 4.53405488s: waiting for machine to come up
	I0804 00:36:52.223044   73669 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0804 00:36:52.230109   73669 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0804 00:36:52.230129   73669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0804 00:36:52.251390   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
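Once that manifest is applied, the kindnet CNI should roll out as a DaemonSet in kube-system. A quick check, assuming kubectl access to the cluster and the upstream kindnet manifest's object names and labels (neither is shown in the log):

    # sketch: confirm the kindnet CNI rolled out (names assumed from the upstream manifest)
    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide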
	I0804 00:36:52.541089   73669 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:36:52.541232   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:52.541234   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-159277 minikube.k8s.io/updated_at=2024_08_04T00_36_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=kindnet-159277 minikube.k8s.io/primary=true
	I0804 00:36:52.724980   73669 ops.go:34] apiserver oom_adj: -16
	I0804 00:36:52.725159   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:53.225344   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:53.725764   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:54.225461   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:54.726024   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:55.225863   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:55.725433   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:52.063858   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:36:54.564569   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:36:54.307791   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:54.308356   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find current IP address of domain calico-159277 in network mk-calico-159277
	I0804 00:36:54.308381   73844 main.go:141] libmachine: (calico-159277) DBG | I0804 00:36:54.308331   74238 retry.go:31] will retry after 4.272361965s: waiting for machine to come up
	I0804 00:36:58.582943   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.583422   73844 main.go:141] libmachine: (calico-159277) Found IP for machine: 192.168.61.250
	I0804 00:36:58.583448   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has current primary IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.583456   73844 main.go:141] libmachine: (calico-159277) Reserving static IP address...
	I0804 00:36:58.583809   73844 main.go:141] libmachine: (calico-159277) DBG | unable to find host DHCP lease matching {name: "calico-159277", mac: "52:54:00:1b:8d:00", ip: "192.168.61.250"} in network mk-calico-159277
	I0804 00:36:58.661664   73844 main.go:141] libmachine: (calico-159277) Reserved static IP address: 192.168.61.250
	I0804 00:36:58.661702   73844 main.go:141] libmachine: (calico-159277) DBG | Getting to WaitForSSH function...
	I0804 00:36:58.661711   73844 main.go:141] libmachine: (calico-159277) Waiting for SSH to be available...
	I0804 00:36:58.664985   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.665530   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:58.665560   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.665768   73844 main.go:141] libmachine: (calico-159277) DBG | Using SSH client type: external
	I0804 00:36:58.665796   73844 main.go:141] libmachine: (calico-159277) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/id_rsa (-rw-------)
	I0804 00:36:58.665821   73844 main.go:141] libmachine: (calico-159277) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:36:58.665839   73844 main.go:141] libmachine: (calico-159277) DBG | About to run SSH command:
	I0804 00:36:58.665852   73844 main.go:141] libmachine: (calico-159277) DBG | exit 0
	I0804 00:36:56.225683   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:56.725882   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:57.225428   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:57.725981   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:58.225726   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:58.725934   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:59.225826   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:59.726246   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:00.225539   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:00.725298   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:36:58.789885   73844 main.go:141] libmachine: (calico-159277) DBG | SSH cmd err, output: <nil>: 
	I0804 00:36:58.790150   73844 main.go:141] libmachine: (calico-159277) KVM machine creation complete!
	I0804 00:36:58.790533   73844 main.go:141] libmachine: (calico-159277) Calling .GetConfigRaw
	I0804 00:36:58.791155   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:36:58.791407   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:36:58.791620   73844 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:36:58.791636   73844 main.go:141] libmachine: (calico-159277) Calling .GetState
	I0804 00:36:58.793615   73844 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:36:58.793628   73844 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:36:58.793634   73844 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:36:58.793640   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:58.796553   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.796993   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:58.797023   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.797128   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:58.797298   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:58.797470   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:58.797618   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:58.797774   73844 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:58.798021   73844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.250 22 <nil> <nil>}
	I0804 00:36:58.798035   73844 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:36:58.904944   73844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:36:58.904965   73844 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:36:58.904974   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:58.908083   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.908587   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:58.908618   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:58.908723   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:58.909049   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:58.909254   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:58.909460   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:58.909665   73844 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:58.909854   73844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.250 22 <nil> <nil>}
	I0804 00:36:58.909867   73844 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:36:59.018333   73844 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:36:59.018423   73844 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:36:59.018437   73844 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:36:59.018449   73844 main.go:141] libmachine: (calico-159277) Calling .GetMachineName
	I0804 00:36:59.018743   73844 buildroot.go:166] provisioning hostname "calico-159277"
	I0804 00:36:59.018771   73844 main.go:141] libmachine: (calico-159277) Calling .GetMachineName
	I0804 00:36:59.018997   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:59.021903   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.022299   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.022335   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.022454   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:59.022633   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.022821   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.023020   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:59.023215   73844 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:59.023438   73844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.250 22 <nil> <nil>}
	I0804 00:36:59.023453   73844 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-159277 && echo "calico-159277" | sudo tee /etc/hostname
	I0804 00:36:59.146022   73844 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-159277
	
	I0804 00:36:59.146054   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:59.148963   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.149398   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.149427   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.149589   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:59.149782   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.149938   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.150096   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:59.150300   73844 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:59.150472   73844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.250 22 <nil> <nil>}
	I0804 00:36:59.150488   73844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-159277' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-159277/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-159277' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:36:59.267359   73844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
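The shell block above keeps the 127.0.1.1 mapping idempotent: it only touches /etc/hosts when the new hostname is not already present, replacing an existing 127.0.1.1 entry or appending one. A quick check of the result on the guest (not part of the log):

    # sketch: confirm the hostname/hosts provisioning took effect
    hostname                             # expect calico-159277
    grep -n '^127\.0\.1\.1' /etc/hosts   # expect "127.0.1.1 calico-159277"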
	I0804 00:36:59.267391   73844 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:36:59.267446   73844 buildroot.go:174] setting up certificates
	I0804 00:36:59.267460   73844 provision.go:84] configureAuth start
	I0804 00:36:59.267473   73844 main.go:141] libmachine: (calico-159277) Calling .GetMachineName
	I0804 00:36:59.267772   73844 main.go:141] libmachine: (calico-159277) Calling .GetIP
	I0804 00:36:59.270873   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.271297   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.271324   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.271493   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:59.274356   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.274755   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.274784   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.274955   73844 provision.go:143] copyHostCerts
	I0804 00:36:59.275021   73844 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:36:59.275033   73844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:36:59.275114   73844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:36:59.275261   73844 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:36:59.275273   73844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:36:59.275297   73844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:36:59.275347   73844 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:36:59.275354   73844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:36:59.275372   73844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:36:59.275413   73844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.calico-159277 san=[127.0.0.1 192.168.61.250 calico-159277 localhost minikube]
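The server certificate generated here is signed by the local CA and carries the SANs listed in the san=[...] field (loopback, the machine IP, the hostname, localhost and minikube). minikube does this in Go; an openssl sequence producing a certificate with the same SANs would look roughly like the following (a sketch under that assumption, not the code path behind provision.go):

    # sketch: issue a server cert with the SANs shown above using openssl
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.calico-159277" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.250,DNS:calico-159277,DNS:localhost,DNS:minikube") \
        -out server.pem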
	I0804 00:36:59.397702   73844 provision.go:177] copyRemoteCerts
	I0804 00:36:59.397763   73844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:36:59.397784   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:59.401140   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.401563   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.401591   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.401800   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:59.401979   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.402144   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:59.402281   73844 sshutil.go:53] new ssh client: &{IP:192.168.61.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/id_rsa Username:docker}
	I0804 00:36:59.487865   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:36:59.516716   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 00:36:59.542706   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:36:59.569384   73844 provision.go:87] duration metric: took 301.911519ms to configureAuth
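For context on the provisioning step just completed: the server cert minted at 00:36:59.275 is signed by the local minikube CA with org jenkins.calico-159277 and SANs covering 127.0.0.1, 192.168.61.250, calico-159277, localhost and minikube, then pushed into /etc/docker on the guest via the scp calls above. minikube does this in Go (provision.go), so the openssl commands below are only an illustrative sketch of an equivalent cert, not the actual code path:

    # sketch only: an equivalent server cert via openssl (minikube uses crypto/x509 internally)
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.calico-159277"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=DNS:calico-159277,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.61.250')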
	I0804 00:36:59.569412   73844 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:36:59.569629   73844 config.go:182] Loaded profile config "calico-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:36:59.569706   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:59.572589   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.572972   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.573010   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.573239   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:59.573446   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.573592   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.573807   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:59.574030   73844 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:59.574236   73844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.250 22 <nil> <nil>}
	I0804 00:36:59.574261   73844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:36:59.858168   73844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:36:59.858201   73844 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:36:59.858212   73844 main.go:141] libmachine: (calico-159277) Calling .GetURL
	I0804 00:36:59.859527   73844 main.go:141] libmachine: (calico-159277) DBG | Using libvirt version 6000000
	I0804 00:36:59.861653   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.862019   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.862039   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.862195   73844 main.go:141] libmachine: Docker is up and running!
	I0804 00:36:59.862210   73844 main.go:141] libmachine: Reticulating splines...
	I0804 00:36:59.862217   73844 client.go:171] duration metric: took 26.166068607s to LocalClient.Create
	I0804 00:36:59.862240   73844 start.go:167] duration metric: took 26.166135051s to libmachine.API.Create "calico-159277"
	I0804 00:36:59.862261   73844 start.go:293] postStartSetup for "calico-159277" (driver="kvm2")
	I0804 00:36:59.862273   73844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:36:59.862289   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:36:59.862527   73844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:36:59.862554   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:59.864726   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.865124   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.865147   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.865316   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:59.865555   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.865782   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:59.865991   73844 sshutil.go:53] new ssh client: &{IP:192.168.61.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/id_rsa Username:docker}
	I0804 00:36:59.948422   73844 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:36:59.953266   73844 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:36:59.953296   73844 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:36:59.953381   73844 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:36:59.953489   73844 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:36:59.953598   73844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:36:59.963297   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:36:59.987627   73844 start.go:296] duration metric: took 125.35041ms for postStartSetup
	I0804 00:36:59.987684   73844 main.go:141] libmachine: (calico-159277) Calling .GetConfigRaw
	I0804 00:36:59.988308   73844 main.go:141] libmachine: (calico-159277) Calling .GetIP
	I0804 00:36:59.991315   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.991681   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.991706   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.991982   73844 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/config.json ...
	I0804 00:36:59.992196   73844 start.go:128] duration metric: took 26.321579752s to createHost
	I0804 00:36:59.992219   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:36:59.994438   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.994769   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:36:59.994789   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:36:59.994922   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:36:59.995123   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.995277   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:36:59.995427   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:36:59.995598   73844 main.go:141] libmachine: Using SSH client type: native
	I0804 00:36:59.995792   73844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.250 22 <nil> <nil>}
	I0804 00:36:59.995805   73844 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:37:00.102484   73844 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731820.056163451
	
	I0804 00:37:00.102507   73844 fix.go:216] guest clock: 1722731820.056163451
	I0804 00:37:00.102517   73844 fix.go:229] Guest: 2024-08-04 00:37:00.056163451 +0000 UTC Remote: 2024-08-04 00:36:59.992208996 +0000 UTC m=+66.352774794 (delta=63.954455ms)
	I0804 00:37:00.102558   73844 fix.go:200] guest clock delta is within tolerance: 63.954455ms
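The guest-clock check above is just arithmetic on the `date +%s.%N` output versus the host's wall clock at the moment the command returned; with the values from this run (a sketch, using bc for the subtraction):

    guest=1722731820.056163451   # 2024-08-04 00:37:00.056 UTC, reported by the VM
    host=1722731819.992208996    # 2024-08-04 00:36:59.992 UTC, host-side reference
    echo "$guest - $host" | bc   # .063954455 -> ~64ms, inside the allowed drift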
	I0804 00:37:00.102573   73844 start.go:83] releasing machines lock for "calico-159277", held for 26.43209765s
	I0804 00:37:00.102601   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:37:00.102848   73844 main.go:141] libmachine: (calico-159277) Calling .GetIP
	I0804 00:37:00.105755   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:00.106122   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:37:00.106152   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:00.106311   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:37:00.106807   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:37:00.107003   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:37:00.107107   73844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:37:00.107144   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:37:00.107201   73844 ssh_runner.go:195] Run: cat /version.json
	I0804 00:37:00.107220   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHHostname
	I0804 00:37:00.109880   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:00.110253   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:00.110285   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:37:00.110307   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:00.110427   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:37:00.110598   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:37:00.110682   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:37:00.110714   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:00.110775   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:37:00.110862   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHPort
	I0804 00:37:00.110923   73844 sshutil.go:53] new ssh client: &{IP:192.168.61.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/id_rsa Username:docker}
	I0804 00:37:00.110986   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHKeyPath
	I0804 00:37:00.111092   73844 main.go:141] libmachine: (calico-159277) Calling .GetSSHUsername
	I0804 00:37:00.111206   73844 sshutil.go:53] new ssh client: &{IP:192.168.61.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/calico-159277/id_rsa Username:docker}
	I0804 00:37:00.190420   73844 ssh_runner.go:195] Run: systemctl --version
	I0804 00:37:00.217267   73844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:37:00.387049   73844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:37:00.394188   73844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:37:00.394259   73844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:37:00.412982   73844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:37:00.413027   73844 start.go:495] detecting cgroup driver to use...
	I0804 00:37:00.413117   73844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:37:00.430560   73844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:37:00.445088   73844 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:37:00.445165   73844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:37:00.459122   73844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:37:00.474838   73844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:37:00.604222   73844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:37:00.778612   73844 docker.go:233] disabling docker service ...
	I0804 00:37:00.778694   73844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:37:00.794604   73844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:37:00.809435   73844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:37:00.960725   73844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:37:01.082634   73844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:37:01.097210   73844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:37:01.118018   73844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:37:01.118087   73844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:37:01.129184   73844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:37:01.129256   73844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:37:01.141454   73844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:37:01.153081   73844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:37:01.164748   73844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:37:01.176150   73844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:37:01.186861   73844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:37:01.206239   73844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
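Taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (approximate; only the keys touched here are shown):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]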
	I0804 00:37:01.217244   73844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:37:01.228653   73844 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:37:01.228712   73844 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:37:01.242767   73844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:37:01.253238   73844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:37:01.379940   73844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:37:01.529480   73844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:37:01.529554   73844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:37:01.534593   73844 start.go:563] Will wait 60s for crictl version
	I0804 00:37:01.534650   73844 ssh_runner.go:195] Run: which crictl
	I0804 00:37:01.538532   73844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:37:01.577045   73844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:37:01.577127   73844 ssh_runner.go:195] Run: crio --version
	I0804 00:37:01.608103   73844 ssh_runner.go:195] Run: crio --version
	I0804 00:37:01.638777   73844 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:36:57.062575   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:36:59.563719   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:01.640076   73844 main.go:141] libmachine: (calico-159277) Calling .GetIP
	I0804 00:37:01.642991   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:01.643335   73844 main.go:141] libmachine: (calico-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8d:00", ip: ""} in network mk-calico-159277: {Iface:virbr3 ExpiryTime:2024-08-04 01:36:49 +0000 UTC Type:0 Mac:52:54:00:1b:8d:00 Iaid: IPaddr:192.168.61.250 Prefix:24 Hostname:calico-159277 Clientid:01:52:54:00:1b:8d:00}
	I0804 00:37:01.643362   73844 main.go:141] libmachine: (calico-159277) DBG | domain calico-159277 has defined IP address 192.168.61.250 and MAC address 52:54:00:1b:8d:00 in network mk-calico-159277
	I0804 00:37:01.643542   73844 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:37:01.648098   73844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
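The bash one-liner above rewrites /etc/hosts through a temp file: it filters out any stale host.minikube.internal entry and appends the current gateway IP, so the guest should end up with a line like:

    192.168.61.1	host.minikube.internal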
	I0804 00:37:01.660972   73844 kubeadm.go:883] updating cluster {Name:calico-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:calico-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:37:01.661088   73844 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:37:01.661148   73844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:37:01.693600   73844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:37:01.693683   73844 ssh_runner.go:195] Run: which lz4
	I0804 00:37:01.697927   73844 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:37:01.702505   73844 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:37:01.702537   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:37:03.245010   73844 crio.go:462] duration metric: took 1.547140865s to copy over tarball
	I0804 00:37:03.245088   73844 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:37:01.225541   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:01.725507   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:02.225239   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:02.726044   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:03.226016   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:03.725506   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:04.225784   73669 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:04.345856   73669 kubeadm.go:1113] duration metric: took 11.804685931s to wait for elevateKubeSystemPrivileges
	I0804 00:37:04.345893   73669 kubeadm.go:394] duration metric: took 23.432669887s to StartCluster
	I0804 00:37:04.345916   73669 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:04.346007   73669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:37:04.347214   73669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:04.347479   73669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 00:37:04.347491   73669 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:37:04.347581   73669 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:37:04.347660   73669 addons.go:69] Setting storage-provisioner=true in profile "kindnet-159277"
	I0804 00:37:04.347686   73669 addons.go:234] Setting addon storage-provisioner=true in "kindnet-159277"
	I0804 00:37:04.347691   73669 addons.go:69] Setting default-storageclass=true in profile "kindnet-159277"
	I0804 00:37:04.347716   73669 host.go:66] Checking if "kindnet-159277" exists ...
	I0804 00:37:04.347723   73669 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-159277"
	I0804 00:37:04.347748   73669 config.go:182] Loaded profile config "kindnet-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:37:04.348153   73669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:04.348184   73669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:04.348200   73669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:04.348222   73669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:04.350267   73669 out.go:177] * Verifying Kubernetes components...
	I0804 00:37:04.351830   73669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:37:04.367852   73669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0804 00:37:04.368419   73669 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:04.369057   73669 main.go:141] libmachine: Using API Version  1
	I0804 00:37:04.369080   73669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:04.369470   73669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0804 00:37:04.369484   73669 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:04.369697   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetState
	I0804 00:37:04.369953   73669 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:04.370634   73669 main.go:141] libmachine: Using API Version  1
	I0804 00:37:04.370656   73669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:04.370996   73669 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:04.371683   73669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:04.371728   73669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:04.373795   73669 addons.go:234] Setting addon default-storageclass=true in "kindnet-159277"
	I0804 00:37:04.373844   73669 host.go:66] Checking if "kindnet-159277" exists ...
	I0804 00:37:04.374229   73669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:04.374269   73669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:04.392671   73669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43373
	I0804 00:37:04.392941   73669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0804 00:37:04.393286   73669 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:04.393479   73669 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:04.393888   73669 main.go:141] libmachine: Using API Version  1
	I0804 00:37:04.393967   73669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:04.394095   73669 main.go:141] libmachine: Using API Version  1
	I0804 00:37:04.394118   73669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:04.394434   73669 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:04.394494   73669 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:04.394695   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetState
	I0804 00:37:04.395041   73669 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:04.395091   73669 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:04.396604   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:37:04.398755   73669 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:37:04.400279   73669 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:37:04.400300   73669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:37:04.400320   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:37:04.403693   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:37:04.404272   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:37:04.404295   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:37:04.404529   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:37:04.404729   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:37:04.404908   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:37:04.405221   73669 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa Username:docker}
	I0804 00:37:04.417065   73669 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I0804 00:37:04.417614   73669 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:04.418348   73669 main.go:141] libmachine: Using API Version  1
	I0804 00:37:04.418376   73669 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:04.418853   73669 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:04.419062   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetState
	I0804 00:37:04.420789   73669 main.go:141] libmachine: (kindnet-159277) Calling .DriverName
	I0804 00:37:04.421069   73669 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:37:04.421086   73669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:37:04.421105   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHHostname
	I0804 00:37:04.424363   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:37:04.424896   73669 main.go:141] libmachine: (kindnet-159277) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:cb:f3", ip: ""} in network mk-kindnet-159277: {Iface:virbr1 ExpiryTime:2024-08-04 01:36:23 +0000 UTC Type:0 Mac:52:54:00:8f:cb:f3 Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:kindnet-159277 Clientid:01:52:54:00:8f:cb:f3}
	I0804 00:37:04.424914   73669 main.go:141] libmachine: (kindnet-159277) DBG | domain kindnet-159277 has defined IP address 192.168.50.99 and MAC address 52:54:00:8f:cb:f3 in network mk-kindnet-159277
	I0804 00:37:04.425062   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHPort
	I0804 00:37:04.425247   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHKeyPath
	I0804 00:37:04.425430   73669 main.go:141] libmachine: (kindnet-159277) Calling .GetSSHUsername
	I0804 00:37:04.425586   73669 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/kindnet-159277/id_rsa Username:docker}
	I0804 00:37:04.503829   73669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 00:37:04.562751   73669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:37:04.692211   73669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:37:04.728156   73669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:37:04.996228   73669 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
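The kubectl pipeline a few lines up (00:37:04.503829) edits the coredns ConfigMap in place; the fragment it injects into the Corefile looks roughly like this (plus a log directive added before errors):

    hosts {
       192.168.50.1 host.minikube.internal
       fallthrough
    }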
	I0804 00:37:04.997597   73669 node_ready.go:35] waiting up to 15m0s for node "kindnet-159277" to be "Ready" ...
	I0804 00:37:05.506586   73669 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-159277" context rescaled to 1 replicas
	I0804 00:37:05.555390   73669 main.go:141] libmachine: Making call to close driver server
	I0804 00:37:05.555430   73669 main.go:141] libmachine: (kindnet-159277) Calling .Close
	I0804 00:37:05.555509   73669 main.go:141] libmachine: Making call to close driver server
	I0804 00:37:05.555534   73669 main.go:141] libmachine: (kindnet-159277) Calling .Close
	I0804 00:37:05.555958   73669 main.go:141] libmachine: (kindnet-159277) DBG | Closing plugin on server side
	I0804 00:37:05.556019   73669 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:37:05.556031   73669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:37:05.556047   73669 main.go:141] libmachine: Making call to close driver server
	I0804 00:37:05.556057   73669 main.go:141] libmachine: (kindnet-159277) Calling .Close
	I0804 00:37:05.556130   73669 main.go:141] libmachine: (kindnet-159277) DBG | Closing plugin on server side
	I0804 00:37:05.556145   73669 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:37:05.556223   73669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:37:05.556270   73669 main.go:141] libmachine: Making call to close driver server
	I0804 00:37:05.556305   73669 main.go:141] libmachine: (kindnet-159277) Calling .Close
	I0804 00:37:05.556430   73669 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:37:05.556472   73669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:37:05.556725   73669 main.go:141] libmachine: (kindnet-159277) DBG | Closing plugin on server side
	I0804 00:37:05.558296   73669 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:37:05.558317   73669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:37:05.579673   73669 main.go:141] libmachine: Making call to close driver server
	I0804 00:37:05.579703   73669 main.go:141] libmachine: (kindnet-159277) Calling .Close
	I0804 00:37:05.580011   73669 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:37:05.580032   73669 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:37:05.580033   73669 main.go:141] libmachine: (kindnet-159277) DBG | Closing plugin on server side
	I0804 00:37:05.581764   73669 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0804 00:37:05.582860   73669 addons.go:510] duration metric: took 1.235282647s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0804 00:37:02.063703   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:04.064686   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:06.562237   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:05.857010   73844 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.611892513s)
	I0804 00:37:05.857043   73844 crio.go:469] duration metric: took 2.611998619s to extract the tarball
	I0804 00:37:05.857054   73844 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:37:05.910268   73844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:37:05.972941   73844 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:37:05.972964   73844 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:37:05.972973   73844 kubeadm.go:934] updating node { 192.168.61.250 8443 v1.30.3 crio true true} ...
	I0804 00:37:05.973084   73844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-159277 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:calico-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
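The [Unit]/[Service] fragment above becomes the kubelet drop-in that is scp'd a little later as the 313-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. If you wanted to confirm the flags that actually took effect on the node, one option (an example command, not part of this run) is:

    minikube -p calico-159277 ssh -- systemctl cat kubelet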
	I0804 00:37:05.973171   73844 ssh_runner.go:195] Run: crio config
	I0804 00:37:06.033685   73844 cni.go:84] Creating CNI manager for "calico"
	I0804 00:37:06.033707   73844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:37:06.033728   73844 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.250 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-159277 NodeName:calico-159277 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:37:06.033863   73844 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-159277"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:37:06.033922   73844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:37:06.044996   73844 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:37:06.045076   73844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:37:06.056103   73844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 00:37:06.075255   73844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:37:06.094292   73844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
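The 2157-byte payload staged above is the rendered kubeadm config printed a few lines earlier; it is later copied over /var/tmp/minikube/kubeadm.yaml (visible near the end of this excerpt). A by-hand sanity check of that file could look something like this (illustrative, not part of the test run):

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run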
	I0804 00:37:06.112476   73844 ssh_runner.go:195] Run: grep 192.168.61.250	control-plane.minikube.internal$ /etc/hosts
	I0804 00:37:06.116460   73844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:37:06.130807   73844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:37:06.268168   73844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:37:06.288532   73844 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277 for IP: 192.168.61.250
	I0804 00:37:06.288560   73844 certs.go:194] generating shared ca certs ...
	I0804 00:37:06.288576   73844 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:06.288731   73844 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:37:06.288771   73844 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:37:06.288778   73844 certs.go:256] generating profile certs ...
	I0804 00:37:06.288838   73844 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/client.key
	I0804 00:37:06.288853   73844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/client.crt with IP's: []
	I0804 00:37:06.369313   73844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/client.crt ...
	I0804 00:37:06.369349   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/client.crt: {Name:mk1b02c581dc1d3ca717eaa2e2c2c65526d879d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:06.369593   73844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/client.key ...
	I0804 00:37:06.369611   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/client.key: {Name:mkc8f562a4fd5bb7937b92e53d4c5427b928f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:06.369713   73844 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.key.35ec55a9
	I0804 00:37:06.369731   73844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.crt.35ec55a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.250]
	I0804 00:37:06.640084   73844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.crt.35ec55a9 ...
	I0804 00:37:06.640116   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.crt.35ec55a9: {Name:mkd5acb9e10dfd3eab36fd038df349040747d04e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:06.640286   73844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.key.35ec55a9 ...
	I0804 00:37:06.640300   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.key.35ec55a9: {Name:mkfbeb8cc3b0b25b07d810248ab90ee18654e829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:06.640369   73844 certs.go:381] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.crt.35ec55a9 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.crt
	I0804 00:37:06.640466   73844 certs.go:385] copying /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.key.35ec55a9 -> /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.key
	I0804 00:37:06.640532   73844 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.key
	I0804 00:37:06.640548   73844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.crt with IP's: []
	I0804 00:37:06.759434   73844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.crt ...
	I0804 00:37:06.759466   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.crt: {Name:mke194440b95a2f119c0e0aa3b396d055529e5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:06.819819   73844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.key ...
	I0804 00:37:06.819857   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.key: {Name:mk928fa15cdc93d995701f38d9ce39b66ce169ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:06.820129   73844 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:37:06.820178   73844 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:37:06.820195   73844 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:37:06.820226   73844 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:37:06.820254   73844 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:37:06.820290   73844 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:37:06.820340   73844 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:37:06.821170   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:37:06.864845   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:37:06.892685   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:37:06.918535   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:37:06.944352   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0804 00:37:06.973880   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:37:07.027421   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:37:07.057940   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/calico-159277/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:37:07.084111   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:37:07.109799   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:37:07.139334   73844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:37:07.170162   73844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:37:07.189303   73844 ssh_runner.go:195] Run: openssl version
	I0804 00:37:07.195531   73844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:37:07.208288   73844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:37:07.213604   73844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:37:07.213676   73844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:37:07.220232   73844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:37:07.232719   73844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:37:07.246015   73844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:37:07.251346   73844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:37:07.251417   73844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:37:07.257811   73844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:37:07.269939   73844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:37:07.281675   73844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:37:07.286504   73844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:37:07.286560   73844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:37:07.292395   73844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
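The hash-named symlinks above are how OpenSSL finds CAs: it looks certificates up in /etc/ssl/certs by subject hash, so each PEM gets a <hash>.0 link. For the minikube CA that corresponds to:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0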
	I0804 00:37:07.304350   73844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:37:07.308686   73844 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:37:07.308738   73844 kubeadm.go:392] StartCluster: {Name:calico-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:calico-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:37:07.308803   73844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:37:07.308861   73844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:37:07.349369   73844 cri.go:89] found id: ""
	I0804 00:37:07.349442   73844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:37:07.361023   73844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:37:07.371439   73844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:37:07.381536   73844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:37:07.381565   73844 kubeadm.go:157] found existing configuration files:
	
	I0804 00:37:07.381633   73844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:37:07.392103   73844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:37:07.392165   73844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:37:07.402154   73844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:37:07.411953   73844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:37:07.412021   73844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:37:07.422112   73844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:37:07.432038   73844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:37:07.432093   73844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:37:07.442855   73844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:37:07.454918   73844 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:37:07.454987   73844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:37:07.465982   73844 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:37:07.529064   73844 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0804 00:37:07.529167   73844 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:37:07.679377   73844 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:37:07.679535   73844 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:37:07.679660   73844 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:37:07.936263   73844 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:37:08.145702   73844 out.go:204]   - Generating certificates and keys ...
	I0804 00:37:08.145836   73844 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:37:08.145933   73844 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:37:08.146069   73844 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:37:08.262964   73844 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:37:08.472990   73844 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:37:07.064370   73669 node_ready.go:53] node "kindnet-159277" has status "Ready":"False"
	I0804 00:37:09.501846   73669 node_ready.go:53] node "kindnet-159277" has status "Ready":"False"
	I0804 00:37:08.703077   73844 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:37:08.918203   73844 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:37:08.918371   73844 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-159277 localhost] and IPs [192.168.61.250 127.0.0.1 ::1]
	I0804 00:37:09.197697   73844 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:37:09.197879   73844 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-159277 localhost] and IPs [192.168.61.250 127.0.0.1 ::1]
	I0804 00:37:09.386753   73844 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:37:09.485506   73844 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:37:09.663120   73844 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:37:09.663397   73844 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:37:09.786012   73844 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:37:10.431293   73844 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:37:10.562582   73844 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:37:10.780681   73844 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:37:10.941001   73844 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:37:10.943497   73844 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:37:10.945651   73844 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:37:08.563145   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:11.062933   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:10.947504   73844 out.go:204]   - Booting up control plane ...
	I0804 00:37:10.947608   73844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:37:10.947696   73844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:37:10.948093   73844 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:37:10.963976   73844 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:37:10.965139   73844 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:37:10.965229   73844 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:37:11.095017   73844 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:37:11.095101   73844 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0804 00:37:12.099367   73844 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004211748s
	I0804 00:37:12.099498   73844 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:37:12.001749   73669 node_ready.go:53] node "kindnet-159277" has status "Ready":"False"
	I0804 00:37:14.002413   73669 node_ready.go:53] node "kindnet-159277" has status "Ready":"False"
	I0804 00:37:13.063846   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:15.064025   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:17.604388   73844 kubeadm.go:310] [api-check] The API server is healthy after 5.50258748s
	I0804 00:37:17.616252   73844 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:37:17.631336   73844 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:37:17.659590   73844 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:37:17.659869   73844 kubeadm.go:310] [mark-control-plane] Marking the node calico-159277 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:37:17.675225   73844 kubeadm.go:310] [bootstrap-token] Using token: 4qalwe.868zp8azza4dt0q8
	I0804 00:37:17.676773   73844 out.go:204]   - Configuring RBAC rules ...
	I0804 00:37:17.676892   73844 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:37:17.685163   73844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:37:17.694166   73844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:37:17.704075   73844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:37:17.708265   73844 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:37:17.726117   73844 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:37:18.013234   73844 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:37:18.498673   73844 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:37:19.010772   73844 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:37:19.010797   73844 kubeadm.go:310] 
	I0804 00:37:19.010867   73844 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:37:19.010877   73844 kubeadm.go:310] 
	I0804 00:37:19.010983   73844 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:37:19.011002   73844 kubeadm.go:310] 
	I0804 00:37:19.011058   73844 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:37:19.011180   73844 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:37:19.011255   73844 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:37:19.011264   73844 kubeadm.go:310] 
	I0804 00:37:19.011328   73844 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:37:19.011336   73844 kubeadm.go:310] 
	I0804 00:37:19.011402   73844 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:37:19.011412   73844 kubeadm.go:310] 
	I0804 00:37:19.011474   73844 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:37:19.011584   73844 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:37:19.011674   73844 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:37:19.011693   73844 kubeadm.go:310] 
	I0804 00:37:19.011840   73844 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:37:19.011949   73844 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:37:19.012087   73844 kubeadm.go:310] 
	I0804 00:37:19.012219   73844 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4qalwe.868zp8azza4dt0q8 \
	I0804 00:37:19.012383   73844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:37:19.012420   73844 kubeadm.go:310] 	--control-plane 
	I0804 00:37:19.012429   73844 kubeadm.go:310] 
	I0804 00:37:19.012541   73844 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:37:19.012556   73844 kubeadm.go:310] 
	I0804 00:37:19.012660   73844 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4qalwe.868zp8azza4dt0q8 \
	I0804 00:37:19.012826   73844 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:37:19.013005   73844 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
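The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which is how kubeadm pins the CA during bootstrap discovery. A minimal Go sketch of that derivation; the ca.crt path is an assumption based on the "/var/lib/minikube/certs" certificateDir reported earlier in this log:

    // Sketch only: compute kubeadm's discovery-token-ca-cert-hash from a CA certificate.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed path, derived from the certificateDir shown above; adjust as needed.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }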
	I0804 00:37:19.013031   73844 cni.go:84] Creating CNI manager for "calico"
	I0804 00:37:19.014737   73844 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0804 00:37:16.502774   73669 node_ready.go:53] node "kindnet-159277" has status "Ready":"False"
	I0804 00:37:19.001792   73669 node_ready.go:53] node "kindnet-159277" has status "Ready":"False"
	I0804 00:37:17.561916   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:19.564348   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:19.016633   73844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0804 00:37:19.016653   73844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253815 bytes)
	I0804 00:37:19.038886   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0804 00:37:20.499768   73844 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.460845689s)
	I0804 00:37:20.499812   73844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:37:20.499893   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:20.499934   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-159277 minikube.k8s.io/updated_at=2024_08_04T00_37_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=calico-159277 minikube.k8s.io/primary=true
	I0804 00:37:20.530166   73844 ops.go:34] apiserver oom_adj: -16
	I0804 00:37:20.624824   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:21.125141   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:21.625629   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:22.125232   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:22.625279   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:23.125547   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:23.625543   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:22.062005   73264 pod_ready.go:102] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"False"
	I0804 00:37:22.562428   73264 pod_ready.go:92] pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:22.562458   73264 pod_ready.go:81] duration metric: took 41.507016329s for pod "coredns-7db6d8ff4d-27f48" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.562470   73264 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-s7pwr" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.564889   73264 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-s7pwr" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-s7pwr" not found
	I0804 00:37:22.564915   73264 pod_ready.go:81] duration metric: took 2.437758ms for pod "coredns-7db6d8ff4d-s7pwr" in "kube-system" namespace to be "Ready" ...
	E0804 00:37:22.564925   73264 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-s7pwr" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-s7pwr" not found
	I0804 00:37:22.564931   73264 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.571748   73264 pod_ready.go:92] pod "etcd-auto-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:22.571770   73264 pod_ready.go:81] duration metric: took 6.83312ms for pod "etcd-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.571779   73264 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.577798   73264 pod_ready.go:92] pod "kube-apiserver-auto-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:22.577822   73264 pod_ready.go:81] duration metric: took 6.036793ms for pod "kube-apiserver-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.577831   73264 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.583055   73264 pod_ready.go:92] pod "kube-controller-manager-auto-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:22.583080   73264 pod_ready.go:81] duration metric: took 5.240513ms for pod "kube-controller-manager-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.583092   73264 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-68w85" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.760356   73264 pod_ready.go:92] pod "kube-proxy-68w85" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:22.760387   73264 pod_ready.go:81] duration metric: took 177.287379ms for pod "kube-proxy-68w85" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:22.760400   73264 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:23.159833   73264 pod_ready.go:92] pod "kube-scheduler-auto-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:23.159863   73264 pod_ready.go:81] duration metric: took 399.454597ms for pod "kube-scheduler-auto-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:23.159874   73264 pod_ready.go:38] duration metric: took 42.137093925s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:37:23.159894   73264 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:37:23.159953   73264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:37:23.177128   73264 api_server.go:72] duration metric: took 42.898245529s to wait for apiserver process to appear ...
	I0804 00:37:23.177158   73264 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:37:23.177177   73264 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I0804 00:37:23.182231   73264 api_server.go:279] https://192.168.72.144:8443/healthz returned 200:
	ok
	I0804 00:37:23.183128   73264 api_server.go:141] control plane version: v1.30.3
	I0804 00:37:23.183156   73264 api_server.go:131] duration metric: took 5.99223ms to wait for apiserver health ...
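The healthz check above is an HTTPS GET against the API server endpoint that passes once the body is "ok". A standalone sketch of such a probe; it skips TLS verification for brevity, which is an assumption for illustration only, and the endpoint is the one shown in the log:

    // Sketch only: probe the API server /healthz endpoint logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{
            // Illustration only: the real check should verify the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.144:8443/healthz") // endpoint from the log
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // the log shows a 200 with body "ok"
    }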
	I0804 00:37:23.183164   73264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:37:23.363374   73264 system_pods.go:59] 7 kube-system pods found
	I0804 00:37:23.363404   73264 system_pods.go:61] "coredns-7db6d8ff4d-27f48" [8305cf9d-2cd5-4c4e-ace3-9571c033f6ff] Running
	I0804 00:37:23.363408   73264 system_pods.go:61] "etcd-auto-159277" [0a99b86a-9245-42cb-bde3-a6345309c234] Running
	I0804 00:37:23.363412   73264 system_pods.go:61] "kube-apiserver-auto-159277" [913e365e-ac35-49f5-9862-167126be3d99] Running
	I0804 00:37:23.363415   73264 system_pods.go:61] "kube-controller-manager-auto-159277" [24fc58a1-5c8d-42cb-977f-5136c2adae1f] Running
	I0804 00:37:23.363420   73264 system_pods.go:61] "kube-proxy-68w85" [402172bb-53bc-4060-8f60-b274c6e637ca] Running
	I0804 00:37:23.363423   73264 system_pods.go:61] "kube-scheduler-auto-159277" [0a4e9c74-aeb7-495b-ae1c-7fac24f09ad5] Running
	I0804 00:37:23.363426   73264 system_pods.go:61] "storage-provisioner" [0dd4cfb7-fcb0-45fa-b5e7-ad4992f2b4a7] Running
	I0804 00:37:23.363433   73264 system_pods.go:74] duration metric: took 180.26297ms to wait for pod list to return data ...
	I0804 00:37:23.363441   73264 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:37:23.558889   73264 default_sa.go:45] found service account: "default"
	I0804 00:37:23.558924   73264 default_sa.go:55] duration metric: took 195.476266ms for default service account to be created ...
	I0804 00:37:23.558936   73264 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:37:23.764045   73264 system_pods.go:86] 7 kube-system pods found
	I0804 00:37:23.764082   73264 system_pods.go:89] "coredns-7db6d8ff4d-27f48" [8305cf9d-2cd5-4c4e-ace3-9571c033f6ff] Running
	I0804 00:37:23.764091   73264 system_pods.go:89] "etcd-auto-159277" [0a99b86a-9245-42cb-bde3-a6345309c234] Running
	I0804 00:37:23.764097   73264 system_pods.go:89] "kube-apiserver-auto-159277" [913e365e-ac35-49f5-9862-167126be3d99] Running
	I0804 00:37:23.764103   73264 system_pods.go:89] "kube-controller-manager-auto-159277" [24fc58a1-5c8d-42cb-977f-5136c2adae1f] Running
	I0804 00:37:23.764109   73264 system_pods.go:89] "kube-proxy-68w85" [402172bb-53bc-4060-8f60-b274c6e637ca] Running
	I0804 00:37:23.764115   73264 system_pods.go:89] "kube-scheduler-auto-159277" [0a4e9c74-aeb7-495b-ae1c-7fac24f09ad5] Running
	I0804 00:37:23.764121   73264 system_pods.go:89] "storage-provisioner" [0dd4cfb7-fcb0-45fa-b5e7-ad4992f2b4a7] Running
	I0804 00:37:23.764129   73264 system_pods.go:126] duration metric: took 205.186911ms to wait for k8s-apps to be running ...
	I0804 00:37:23.764176   73264 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:37:23.764229   73264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:37:23.786595   73264 system_svc.go:56] duration metric: took 22.409132ms WaitForService to wait for kubelet
	I0804 00:37:23.786635   73264 kubeadm.go:582] duration metric: took 43.507756777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:37:23.786666   73264 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:37:23.959037   73264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:37:23.959068   73264 node_conditions.go:123] node cpu capacity is 2
	I0804 00:37:23.959081   73264 node_conditions.go:105] duration metric: took 172.40799ms to run NodePressure ...
	I0804 00:37:23.959094   73264 start.go:241] waiting for startup goroutines ...
	I0804 00:37:23.959101   73264 start.go:246] waiting for cluster config update ...
	I0804 00:37:23.959113   73264 start.go:255] writing updated cluster config ...
	I0804 00:37:23.959363   73264 ssh_runner.go:195] Run: rm -f paused
	I0804 00:37:24.011676   73264 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:37:24.013639   73264 out.go:177] * Done! kubectl is now configured to use "auto-159277" cluster and "default" namespace by default
	I0804 00:37:21.002393   73669 node_ready.go:53] node "kindnet-159277" has status "Ready":"False"
	I0804 00:37:22.501921   73669 node_ready.go:49] node "kindnet-159277" has status "Ready":"True"
	I0804 00:37:22.501948   73669 node_ready.go:38] duration metric: took 17.504320782s for node "kindnet-159277" to be "Ready" ...
	I0804 00:37:22.501959   73669 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:37:22.510258   73669 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-zq74b" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.017795   73669 pod_ready.go:92] pod "coredns-7db6d8ff4d-zq74b" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:24.017821   73669 pod_ready.go:81] duration metric: took 1.507527882s for pod "coredns-7db6d8ff4d-zq74b" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.017834   73669 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.023866   73669 pod_ready.go:92] pod "etcd-kindnet-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:24.023883   73669 pod_ready.go:81] duration metric: took 6.041952ms for pod "etcd-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.023893   73669 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.028128   73669 pod_ready.go:92] pod "kube-apiserver-kindnet-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:24.028148   73669 pod_ready.go:81] duration metric: took 4.248744ms for pod "kube-apiserver-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.028157   73669 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.032494   73669 pod_ready.go:92] pod "kube-controller-manager-kindnet-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:24.032517   73669 pod_ready.go:81] duration metric: took 4.352654ms for pod "kube-controller-manager-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.032529   73669 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-6jbxl" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.102926   73669 pod_ready.go:92] pod "kube-proxy-6jbxl" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:24.102962   73669 pod_ready.go:81] duration metric: took 70.424122ms for pod "kube-proxy-6jbxl" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.102976   73669 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.502265   73669 pod_ready.go:92] pod "kube-scheduler-kindnet-159277" in "kube-system" namespace has status "Ready":"True"
	I0804 00:37:24.502290   73669 pod_ready.go:81] duration metric: took 399.306455ms for pod "kube-scheduler-kindnet-159277" in "kube-system" namespace to be "Ready" ...
	I0804 00:37:24.502301   73669 pod_ready.go:38] duration metric: took 2.00033003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:37:24.502317   73669 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:37:24.502381   73669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:37:24.522147   73669 api_server.go:72] duration metric: took 20.174613394s to wait for apiserver process to appear ...
	I0804 00:37:24.522182   73669 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:37:24.522204   73669 api_server.go:253] Checking apiserver healthz at https://192.168.50.99:8443/healthz ...
	I0804 00:37:24.526818   73669 api_server.go:279] https://192.168.50.99:8443/healthz returned 200:
	ok
	I0804 00:37:24.528067   73669 api_server.go:141] control plane version: v1.30.3
	I0804 00:37:24.528091   73669 api_server.go:131] duration metric: took 5.901784ms to wait for apiserver health ...
	I0804 00:37:24.528099   73669 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:37:24.706309   73669 system_pods.go:59] 8 kube-system pods found
	I0804 00:37:24.706342   73669 system_pods.go:61] "coredns-7db6d8ff4d-zq74b" [925706d8-90d6-4d5b-97f7-b8dde5cf36b7] Running
	I0804 00:37:24.706346   73669 system_pods.go:61] "etcd-kindnet-159277" [f6b5d3fa-e196-4ccd-beb3-bc3d983de319] Running
	I0804 00:37:24.706350   73669 system_pods.go:61] "kindnet-hsnjd" [f9148e5f-c4e1-4471-91a9-66761beb1ffa] Running
	I0804 00:37:24.706354   73669 system_pods.go:61] "kube-apiserver-kindnet-159277" [7a7fec5c-11f6-48b9-a0bf-4b3735fcba5c] Running
	I0804 00:37:24.706361   73669 system_pods.go:61] "kube-controller-manager-kindnet-159277" [76896ccf-f733-4c52-9821-f3a51e29c656] Running
	I0804 00:37:24.706367   73669 system_pods.go:61] "kube-proxy-6jbxl" [3d244344-22bb-4161-a310-9b6b9996b000] Running
	I0804 00:37:24.706372   73669 system_pods.go:61] "kube-scheduler-kindnet-159277" [fb8432b0-64d8-49cc-bef3-565aed50115e] Running
	I0804 00:37:24.706376   73669 system_pods.go:61] "storage-provisioner" [517b2acc-0563-4e02-bc88-331b8a4f2091] Running
	I0804 00:37:24.706388   73669 system_pods.go:74] duration metric: took 178.281757ms to wait for pod list to return data ...
	I0804 00:37:24.706405   73669 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:37:24.901577   73669 default_sa.go:45] found service account: "default"
	I0804 00:37:24.901609   73669 default_sa.go:55] duration metric: took 195.194175ms for default service account to be created ...
	I0804 00:37:24.901620   73669 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:37:25.105350   73669 system_pods.go:86] 8 kube-system pods found
	I0804 00:37:25.105394   73669 system_pods.go:89] "coredns-7db6d8ff4d-zq74b" [925706d8-90d6-4d5b-97f7-b8dde5cf36b7] Running
	I0804 00:37:25.105400   73669 system_pods.go:89] "etcd-kindnet-159277" [f6b5d3fa-e196-4ccd-beb3-bc3d983de319] Running
	I0804 00:37:25.105405   73669 system_pods.go:89] "kindnet-hsnjd" [f9148e5f-c4e1-4471-91a9-66761beb1ffa] Running
	I0804 00:37:25.105409   73669 system_pods.go:89] "kube-apiserver-kindnet-159277" [7a7fec5c-11f6-48b9-a0bf-4b3735fcba5c] Running
	I0804 00:37:25.105413   73669 system_pods.go:89] "kube-controller-manager-kindnet-159277" [76896ccf-f733-4c52-9821-f3a51e29c656] Running
	I0804 00:37:25.105417   73669 system_pods.go:89] "kube-proxy-6jbxl" [3d244344-22bb-4161-a310-9b6b9996b000] Running
	I0804 00:37:25.105420   73669 system_pods.go:89] "kube-scheduler-kindnet-159277" [fb8432b0-64d8-49cc-bef3-565aed50115e] Running
	I0804 00:37:25.105424   73669 system_pods.go:89] "storage-provisioner" [517b2acc-0563-4e02-bc88-331b8a4f2091] Running
	I0804 00:37:25.105430   73669 system_pods.go:126] duration metric: took 203.804704ms to wait for k8s-apps to be running ...
	I0804 00:37:25.105437   73669 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:37:25.105476   73669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:37:25.122219   73669 system_svc.go:56] duration metric: took 16.771683ms WaitForService to wait for kubelet
	I0804 00:37:25.122252   73669 kubeadm.go:582] duration metric: took 20.774731899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:37:25.122278   73669 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:37:25.302128   73669 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:37:25.302158   73669 node_conditions.go:123] node cpu capacity is 2
	I0804 00:37:25.302169   73669 node_conditions.go:105] duration metric: took 179.886294ms to run NodePressure ...
	I0804 00:37:25.302180   73669 start.go:241] waiting for startup goroutines ...
	I0804 00:37:25.302187   73669 start.go:246] waiting for cluster config update ...
	I0804 00:37:25.302196   73669 start.go:255] writing updated cluster config ...
	I0804 00:37:25.302474   73669 ssh_runner.go:195] Run: rm -f paused
	I0804 00:37:25.364422   73669 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:37:25.366038   73669 out.go:177] * Done! kubectl is now configured to use "kindnet-159277" cluster and "default" namespace by default
	I0804 00:37:24.124969   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:24.625804   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:25.124817   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:25.625325   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:26.125757   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:26.625101   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:27.125055   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:27.625147   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:28.125871   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:28.625770   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:29.125010   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:29.625716   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:30.125686   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:30.625325   73844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:37:30.755973   73844 kubeadm.go:1113] duration metric: took 10.256137462s to wait for elevateKubeSystemPrivileges
	I0804 00:37:30.756019   73844 kubeadm.go:394] duration metric: took 23.447283456s to StartCluster
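The repeated "kubectl get sa default" runs between 00:37:20 and 00:37:30 poll for the default service account in the new cluster; the summary line above reports that wait as the ~10.26s elevateKubeSystemPrivileges step. A hypothetical retry-loop sketch of that pattern using only the standard library, with the kubectl and kubeconfig paths taken from the log and a timeout that is an assumption:

    // Sketch only: retry "kubectl get sa default" until it succeeds or a deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl" // path from the log
        kubeconfig := "/var/lib/minikube/kubeconfig"            // path from the log
        deadline := time.Now().Add(2 * time.Minute)             // timeout is an assumption
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing seen in the log
        }
        fmt.Println("timed out waiting for the default service account")
    }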
	I0804 00:37:30.756041   73844 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:30.756137   73844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:37:30.757898   73844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:37:30.758144   73844 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:37:30.758253   73844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 00:37:30.758295   73844 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:37:30.758395   73844 addons.go:69] Setting storage-provisioner=true in profile "calico-159277"
	I0804 00:37:30.758429   73844 addons.go:234] Setting addon storage-provisioner=true in "calico-159277"
	I0804 00:37:30.758462   73844 config.go:182] Loaded profile config "calico-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:37:30.758470   73844 addons.go:69] Setting default-storageclass=true in profile "calico-159277"
	I0804 00:37:30.758491   73844 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-159277"
	I0804 00:37:30.758465   73844 host.go:66] Checking if "calico-159277" exists ...
	I0804 00:37:30.758918   73844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:30.758942   73844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:30.758951   73844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:30.758974   73844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:30.760337   73844 out.go:177] * Verifying Kubernetes components...
	I0804 00:37:30.761948   73844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:37:30.776727   73844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I0804 00:37:30.777196   73844 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:30.777799   73844 main.go:141] libmachine: Using API Version  1
	I0804 00:37:30.777851   73844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:30.778688   73844 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:30.779280   73844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:30.779312   73844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:30.779958   73844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0804 00:37:30.780521   73844 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:30.781028   73844 main.go:141] libmachine: Using API Version  1
	I0804 00:37:30.781057   73844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:30.781413   73844 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:30.781598   73844 main.go:141] libmachine: (calico-159277) Calling .GetState
	I0804 00:37:30.785158   73844 addons.go:234] Setting addon default-storageclass=true in "calico-159277"
	I0804 00:37:30.785196   73844 host.go:66] Checking if "calico-159277" exists ...
	I0804 00:37:30.785520   73844 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:37:30.785545   73844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:37:30.795977   73844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I0804 00:37:30.796467   73844 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:37:30.796993   73844 main.go:141] libmachine: Using API Version  1
	I0804 00:37:30.797014   73844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:37:30.797378   73844 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:37:30.797590   73844 main.go:141] libmachine: (calico-159277) Calling .GetState
	I0804 00:37:30.799466   73844 main.go:141] libmachine: (calico-159277) Calling .DriverName
	I0804 00:37:30.801568   73844 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.236576587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731852236546369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25a7f3fb-9798-4d62-86d4-c8431db9921c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.237645085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a8fd3b0-0a75-4c2e-820a-6cdfd7bc1fe2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.237721837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a8fd3b0-0a75-4c2e-820a-6cdfd7bc1fe2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.237976505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a8fd3b0-0a75-4c2e-820a-6cdfd7bc1fe2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.288002639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ef4f473-957c-4af0-beb2-8b27529327d2 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.288091872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ef4f473-957c-4af0-beb2-8b27529327d2 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.290607810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66c7bf72-2c1c-43a6-beb6-69ec79850d4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.291423791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731852291388488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66c7bf72-2c1c-43a6-beb6-69ec79850d4a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.292329491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5216c263-2721-487c-a25b-22bb42325831 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.292409969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5216c263-2721-487c-a25b-22bb42325831 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.292704657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5216c263-2721-487c-a25b-22bb42325831 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.344373908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0324723-2571-440d-8427-874c471fd56c name=/runtime.v1.RuntimeService/Version
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.344469875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0324723-2571-440d-8427-874c471fd56c name=/runtime.v1.RuntimeService/Version
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.345427867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3746bd08-59ee-4834-a3a7-959e8abd1e2e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.345959814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731852345927267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3746bd08-59ee-4834-a3a7-959e8abd1e2e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.346553426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4df01fd6-ac78-4e9f-b475-ccc3e504ab4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.346605308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4df01fd6-ac78-4e9f-b475-ccc3e504ab4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.346856377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4df01fd6-ac78-4e9f-b475-ccc3e504ab4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.384033323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f359b8a3-a905-46cf-b432-d6ea4f7376d7 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.384195672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f359b8a3-a905-46cf-b432-d6ea4f7376d7 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.386138342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af01b1d6-78d5-4230-9ea4-f4f5a28558ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.386719817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731852386683295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af01b1d6-78d5-4230-9ea4-f4f5a28558ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.387605385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d21f14c1-2e89-40fc-933f-76be31ccbd3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.387696074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d21f14c1-2e89-40fc-933f-76be31ccbd3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:37:32 default-k8s-diff-port-969068 crio[721]: time="2024-08-04 00:37:32.388051566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730571215452955,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5714e350b7d3ec8501e537fba968cd32854ab44dd2bc0047b8ddeeba144c84be,PodSandboxId:5a38baa5765133be5a495b625834b0fb776e8732d5a3b0caa1d76245047395e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730550920883010,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c62629d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd,PodSandboxId:fb8d88e7d4e578080a0c9996970016a46ba2c16cdd5f8402fde7822a20d85a46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730548130296560,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8v28,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1c179bf-e99a-4b59-b731-dac458e6d6aa,},Annotations:map[string]string{io.kubernetes.container.hash: 45fa8397,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02,PodSandboxId:f98cfca649c0801643689f4d48ac632fbcace174d21960c58504eb09c0572d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730540358605379,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c58edb4a-bb0b-4d76-a279-cdcf7e14bd68,},Annotations:map[string]string{io.kubernetes.container.hash: 416c07b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d,PodSandboxId:c09c337b662181fd76cd1123c2d3284f65d1e6922e392d70fb7b8857d4cd41c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730540356741599,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zz7fr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e46c77a-ef1c-402d-807b
-8d12b2e17b07,},Annotations:map[string]string{io.kubernetes.container.hash: f5845157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6,PodSandboxId:317b235c3055d1ff6122302eb93c293879f4e52aa54c634243e50be931ca2b7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730535784297938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deef0c779b084ab671cb1
b778374b594,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f,PodSandboxId:b5127a98d6b9381f51fb9df14b0a7a1f26d53fa6d3d428f19585d1c073b9c087,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730535700688189,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: ed0fb553a24a63a0aec0b3352959a32c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b,PodSandboxId:5d489d038dfa91b2dafed39c5a4d9a6cdeee0e7e5973760d73cb3c57e7769be6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730535682625570,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: e60c97373b9bec338962f9277ca078b4,},Annotations:map[string]string{io.kubernetes.container.hash: 1109b6bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37,PodSandboxId:10cc0f79810c2146458bcfd2e2f3cdfc87d9e4177e3d833adad52dcc694f96b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730535620643651,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-969068,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ab56790b945f92107bd1638a2fad
4b,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1b4d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d21f14c1-2e89-40fc-933f-76be31ccbd3f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34bf0e9504879       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   f98cfca649c08       storage-provisioner
	5714e350b7d3e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   5a38baa576513       busybox
	5cf9a1c37ebd1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   fb8d88e7d4e57       coredns-7db6d8ff4d-b8v28
	53cb13593bed6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   f98cfca649c08       storage-provisioner
	572acf711df5e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      21 minutes ago      Running             kube-proxy                1                   c09c337b66218       kube-proxy-zz7fr
	11c7eacd29c36       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      21 minutes ago      Running             kube-scheduler            1                   317b235c3055d       kube-scheduler-default-k8s-diff-port-969068
	f021cd4986aa6       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      21 minutes ago      Running             kube-controller-manager   1                   b5127a98d6b93       kube-controller-manager-default-k8s-diff-port-969068
	0b0897d8c61e8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      21 minutes ago      Running             kube-apiserver            1                   5d489d038dfa9       kube-apiserver-default-k8s-diff-port-969068
	7b181ffd7672a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   10cc0f79810c2       etcd-default-k8s-diff-port-969068
	
	
	==> coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37873 - 45817 "HINFO IN 5416323336611825304.3429816356777871689. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009957744s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-969068
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-969068
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=default-k8s-diff-port-969068
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_08_14_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:08:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-969068
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:37:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:36:33 +0000   Sun, 04 Aug 2024 00:08:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:36:33 +0000   Sun, 04 Aug 2024 00:08:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:36:33 +0000   Sun, 04 Aug 2024 00:08:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:36:33 +0000   Sun, 04 Aug 2024 00:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    default-k8s-diff-port-969068
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1731be18e0dd44ebb52e79b8fbffcd93
	  System UUID:                1731be18-e0dd-44eb-b52e-79b8fbffcd93
	  Boot ID:                    ae2bf9db-2992-49a8-8008-f8c73d0c354b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-b8v28                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-969068                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-969068             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-969068    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-zz7fr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-969068             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-646qm                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-969068 event: Registered Node default-k8s-diff-port-969068 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-969068 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-969068 event: Registered Node default-k8s-diff-port-969068 in Controller
	
	
	==> dmesg <==
	[Aug 4 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054847] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039815] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.866982] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.578796] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.616932] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.825084] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.065143] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070191] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.213654] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.132525] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.368842] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[  +4.956402] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.062762] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.254526] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +5.646814] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.508736] systemd-fstab-generator[1559]: Ignoring "noauto" option for root device
	[  +1.251637] kauditd_printk_skb: 62 callbacks suppressed
	[  +9.486368] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] <==
	{"level":"warn","ts":"2024-08-04T00:34:24.692543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:34:24.356141Z","time spent":"336.307328ms","remote":"127.0.0.1:52896","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1545 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-04T00:34:24.692013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.429817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:34:24.692956Z","caller":"traceutil/trace.go:171","msg":"trace[1041914598] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1548; }","duration":"237.451195ms","start":"2024-08-04T00:34:24.455494Z","end":"2024-08-04T00:34:24.692945Z","steps":["trace[1041914598] 'agreement among raft nodes before linearized reading'  (duration: 236.43416ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:34:24.692054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.499573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:34:24.693188Z","caller":"traceutil/trace.go:171","msg":"trace[1998338788] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1548; }","duration":"265.64933ms","start":"2024-08-04T00:34:24.427521Z","end":"2024-08-04T00:34:24.69317Z","steps":["trace[1998338788] 'agreement among raft nodes before linearized reading'  (duration: 264.516311ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:23.165491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.639785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-08-04T00:35:23.165683Z","caller":"traceutil/trace.go:171","msg":"trace[861035717] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1592; }","duration":"180.169608ms","start":"2024-08-04T00:35:22.985483Z","end":"2024-08-04T00:35:23.165653Z","steps":["trace[861035717] 'range keys from in-memory index tree'  (duration: 179.403223ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:35:23.357014Z","caller":"traceutil/trace.go:171","msg":"trace[1716040007] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"199.023193ms","start":"2024-08-04T00:35:23.157971Z","end":"2024-08-04T00:35:23.356995Z","steps":["trace[1716040007] 'process raft request'  (duration: 198.913231ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:35:23.384538Z","caller":"traceutil/trace.go:171","msg":"trace[1769909971] transaction","detail":"{read_only:false; response_revision:1594; number_of_response:1; }","duration":"214.650093ms","start":"2024-08-04T00:35:23.169866Z","end":"2024-08-04T00:35:23.384517Z","steps":["trace[1769909971] 'process raft request'  (duration: 212.368156ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:35:38.122268Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1363}
	{"level":"info","ts":"2024-08-04T00:35:38.128025Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1363,"took":"5.430275ms","hash":4175205413,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-04T00:35:38.128078Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4175205413,"revision":1363,"compact-revision":1120}
	{"level":"info","ts":"2024-08-04T00:36:14.380613Z","caller":"traceutil/trace.go:171","msg":"trace[1973607082] transaction","detail":"{read_only:false; response_revision:1636; number_of_response:1; }","duration":"408.141454ms","start":"2024-08-04T00:36:13.972447Z","end":"2024-08-04T00:36:14.380589Z","steps":["trace[1973607082] 'process raft request'  (duration: 408.010683ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:36:14.380905Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:36:13.97243Z","time spent":"408.293624ms","remote":"127.0.0.1:52982","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-969068\" mod_revision:1628 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-969068\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-969068\" > >"}
	{"level":"info","ts":"2024-08-04T00:36:15.782689Z","caller":"traceutil/trace.go:171","msg":"trace[753906029] transaction","detail":"{read_only:false; response_revision:1637; number_of_response:1; }","duration":"116.256191ms","start":"2024-08-04T00:36:15.666413Z","end":"2024-08-04T00:36:15.782669Z","steps":["trace[753906029] 'process raft request'  (duration: 116.124373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:36:37.958292Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.970712ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4444649426301810755 > lease_revoke:<id:3dae911abddcf7ee>","response":"size:27"}
	{"level":"info","ts":"2024-08-04T00:37:06.450388Z","caller":"traceutil/trace.go:171","msg":"trace[1349453185] transaction","detail":"{read_only:false; response_revision:1678; number_of_response:1; }","duration":"226.483013ms","start":"2024-08-04T00:37:06.223883Z","end":"2024-08-04T00:37:06.450366Z","steps":["trace[1349453185] 'process raft request'  (duration: 226.121331ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:37:07.993267Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.090314ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4444649426301810900 > lease_revoke:<id:3dae911abddcf891>","response":"size:27"}
	{"level":"warn","ts":"2024-08-04T00:37:08.584363Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.455795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:37:08.584556Z","caller":"traceutil/trace.go:171","msg":"trace[602932783] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1679; }","duration":"165.680344ms","start":"2024-08-04T00:37:08.418849Z","end":"2024-08-04T00:37:08.584529Z","steps":["trace[602932783] 'range keys from in-memory index tree'  (duration: 165.40637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:37:08.584428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.632887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-08-04T00:37:08.584995Z","caller":"traceutil/trace.go:171","msg":"trace[789487980] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1679; }","duration":"130.277951ms","start":"2024-08-04T00:37:08.454704Z","end":"2024-08-04T00:37:08.584982Z","steps":["trace[789487980] 'range keys from in-memory index tree'  (duration: 129.494677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:37:08.584466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.11733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:37:08.585348Z","caller":"traceutil/trace.go:171","msg":"trace[1642987674] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1679; }","duration":"130.009763ms","start":"2024-08-04T00:37:08.455327Z","end":"2024-08-04T00:37:08.585336Z","steps":["trace[1642987674] 'range keys from in-memory index tree'  (duration: 129.078142ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:37:28.203315Z","caller":"traceutil/trace.go:171","msg":"trace[1483584588] transaction","detail":"{read_only:false; response_revision:1696; number_of_response:1; }","duration":"220.590089ms","start":"2024-08-04T00:37:27.982706Z","end":"2024-08-04T00:37:28.203296Z","steps":["trace[1483584588] 'process raft request'  (duration: 220.406709ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:37:32 up 22 min,  0 users,  load average: 0.24, 0.25, 0.16
	Linux default-k8s-diff-port-969068 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] <==
	I0804 00:31:40.418993       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:33:40.417917       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:33:40.417996       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:33:40.418004       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:33:40.419099       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:33:40.419244       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:33:40.419275       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:35:39.422533       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:35:39.422666       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0804 00:35:40.422978       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:35:40.423122       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:35:40.423164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:35:40.423981       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:35:40.424082       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:35:40.424245       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:36:40.423968       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:36:40.424222       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:36:40.424272       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:36:40.425551       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:36:40.425716       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:36:40.425755       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] <==
	I0804 00:32:11.989841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="231.226µs"
	E0804 00:32:25.888039       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:32:26.454223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:32:55.893858       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:32:56.462278       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:33:25.897929       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:33:26.470338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:33:55.903852       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:33:56.478486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:34:25.910018       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:34:26.488925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:34:55.916016       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:34:56.497521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:35:25.920107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:35:26.506122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:35:55.925185       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:35:56.513236       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:36:25.930454       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:36:26.520880       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:36:55.935016       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:36:56.529207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:37:12.994319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="438.11µs"
	E0804 00:37:25.940930       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:37:26.538466       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:37:28.205918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="253.735µs"
	
	
	==> kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] <==
	I0804 00:15:40.677126       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:15:40.698470       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.132"]
	I0804 00:15:40.777445       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:15:40.777594       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:15:40.777702       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:15:40.783055       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:15:40.783316       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:15:40.783536       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:15:40.787206       1 config.go:192] "Starting service config controller"
	I0804 00:15:40.787984       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:15:40.788568       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:15:40.788668       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:15:40.789253       1 config.go:319] "Starting node config controller"
	I0804 00:15:40.789351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:15:40.888974       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:15:40.889075       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:15:40.889526       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] <==
	I0804 00:15:36.857728       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:15:39.394229       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:15:39.394266       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:15:39.394276       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:15:39.394284       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:15:39.437890       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:15:39.438008       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:15:39.448044       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:15:39.448083       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:15:39.448576       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:15:39.449255       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:15:39.549918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:34:55 default-k8s-diff-port-969068 kubelet[932]: E0804 00:34:55.976344     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:35:09 default-k8s-diff-port-969068 kubelet[932]: E0804 00:35:09.974921     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:35:24 default-k8s-diff-port-969068 kubelet[932]: E0804 00:35:24.976950     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:35:34 default-k8s-diff-port-969068 kubelet[932]: E0804 00:35:34.991744     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:35:34 default-k8s-diff-port-969068 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:35:34 default-k8s-diff-port-969068 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:35:34 default-k8s-diff-port-969068 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:35:34 default-k8s-diff-port-969068 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:35:39 default-k8s-diff-port-969068 kubelet[932]: E0804 00:35:39.975861     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:35:51 default-k8s-diff-port-969068 kubelet[932]: E0804 00:35:51.976357     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:36:02 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:02.976049     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:36:13 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:13.975970     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:36:27 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:27.976247     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:36:34 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:34.994122     932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:36:34 default-k8s-diff-port-969068 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:36:34 default-k8s-diff-port-969068 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:36:34 default-k8s-diff-port-969068 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:36:34 default-k8s-diff-port-969068 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:36:42 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:42.976316     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:36:57 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:57.993367     932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 04 00:36:57 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:57.993446     932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 04 00:36:57 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:57.993688     932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dz29k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-646qm_kube-system(c28af6f2-95c1-44ae-833a-d426ca62a169): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 04 00:36:57 default-k8s-diff-port-969068 kubelet[932]: E0804 00:36:57.993735     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:37:12 default-k8s-diff-port-969068 kubelet[932]: E0804 00:37:12.975898     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	Aug 04 00:37:27 default-k8s-diff-port-969068 kubelet[932]: E0804 00:37:27.975831     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-646qm" podUID="c28af6f2-95c1-44ae-833a-d426ca62a169"
	
	
	==> storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] <==
	I0804 00:16:11.328778       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:16:11.337366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:16:11.337569       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:16:28.737069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:16:28.737277       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-969068_3c0407ad-5d35-410d-833f-6bff51709cbd!
	I0804 00:16:28.738433       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db5a6da5-0284-4a8e-a871-d4eb2be7e069", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-969068_3c0407ad-5d35-410d-833f-6bff51709cbd became leader
	I0804 00:16:28.837887       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-969068_3c0407ad-5d35-410d-833f-6bff51709cbd!
	
	
	==> storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] <==
	I0804 00:15:40.576307       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0804 00:16:10.580730       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-646qm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 describe pod metrics-server-569cc877fc-646qm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-969068 describe pod metrics-server-569cc877fc-646qm: exit status 1 (70.080266ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-646qm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-969068 describe pod metrics-server-569cc877fc-646qm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (502.35s)
E0804 00:39:31.150055   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:40.210624   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (381.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-877598 -n embed-certs-877598
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-04 00:35:50.778599954 +0000 UTC m=+6486.720843999
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-877598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-877598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.564µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-877598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-877598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-877598 logs -n 25: (1.268176812s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:33 UTC | 04 Aug 24 00:33 UTC |
	| start   | -p newest-cni-836281 --memory=2200 --alsologtostderr   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:33 UTC | 04 Aug 24 00:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-836281             | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-836281                  | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-836281 --memory=2200 --alsologtostderr   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-836281 image list                           | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| delete  | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| start   | -p auto-159277 --memory=3072                           | auto-159277                  | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| start   | -p kindnet-159277                                      | kindnet-159277               | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:35:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:35:50.898085   73669 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:35:50.898358   73669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:35:50.898369   73669 out.go:304] Setting ErrFile to fd 2...
	I0804 00:35:50.898376   73669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:35:50.898586   73669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:35:50.899177   73669 out.go:298] Setting JSON to false
	I0804 00:35:50.900118   73669 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8295,"bootTime":1722723456,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:35:50.900184   73669 start.go:139] virtualization: kvm guest
	I0804 00:35:50.902693   73669 out.go:177] * [kindnet-159277] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:35:50.904116   73669 notify.go:220] Checking for updates...
	I0804 00:35:50.904154   73669 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:35:50.905507   73669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:35:50.906896   73669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:35:50.908331   73669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:35:50.909807   73669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:35:50.911189   73669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:35:50.913076   73669 config.go:182] Loaded profile config "auto-159277": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:50.913216   73669 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:50.913323   73669 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:50.913500   73669 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:35:50.952561   73669 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:35:50.953859   73669 start.go:297] selected driver: kvm2
	I0804 00:35:50.953875   73669 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:35:50.953889   73669 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:35:50.954939   73669 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:35:50.955016   73669 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:35:50.972872   73669 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:35:50.972936   73669 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:35:50.973243   73669 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:35:50.973279   73669 cni.go:84] Creating CNI manager for "kindnet"
	I0804 00:35:50.973285   73669 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0804 00:35:50.973372   73669 start.go:340] cluster config:
	{Name:kindnet-159277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:35:50.973510   73669 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:35:50.975501   73669 out.go:177] * Starting "kindnet-159277" primary control-plane node in "kindnet-159277" cluster
	
	
	==> CRI-O <==
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.381756070Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731751381733385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99632eb8-03f1-4453-8ada-b7e6f5a75e71 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.382399762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9161b54d-f103-49a1-a606-fe71a1bd161c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.382466764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9161b54d-f103-49a1-a606-fe71a1bd161c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.382691389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9161b54d-f103-49a1-a606-fe71a1bd161c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.433118897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f51111b-9b4c-4753-b412-51125a73eb85 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.433192887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f51111b-9b4c-4753-b412-51125a73eb85 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.434652172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20e53b94-7e28-4152-90f4-5dea08a6c5b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.435255957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731751435224578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20e53b94-7e28-4152-90f4-5dea08a6c5b8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.435981530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28559d77-6828-496e-a5d9-e11b352b6a83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.436061263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28559d77-6828-496e-a5d9-e11b352b6a83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.436358378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28559d77-6828-496e-a5d9-e11b352b6a83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.483832902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b3f1dca-b286-4df9-b60a-624a1b020767 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.483925851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b3f1dca-b286-4df9-b60a-624a1b020767 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.486033999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8454dfbd-c353-4888-904e-beef91e722f2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.486440464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731751486418535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8454dfbd-c353-4888-904e-beef91e722f2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.487363996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d23d823-3c32-4da5-ad77-a58b7667c0a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.487422411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d23d823-3c32-4da5-ad77-a58b7667c0a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.487813836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d23d823-3c32-4da5-ad77-a58b7667c0a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.529251031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53e8888d-c8df-488f-b70b-8b7310e7314b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.529335787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53e8888d-c8df-488f-b70b-8b7310e7314b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.530464274Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d97992af-31c7-4fb9-bc3e-b97010f1ceff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.530936087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731751530914596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d97992af-31c7-4fb9-bc3e-b97010f1ceff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.531639745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9aaaafa6-e844-4017-89a5-7c128aac1e68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.531758154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9aaaafa6-e844-4017-89a5-7c128aac1e68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:51 embed-certs-877598 crio[726]: time="2024-08-04 00:35:51.531951230Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730590803109066,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c57112216393de3cb0af5ba4d680f81582061ce6565f1fe2d6f785c1dfe08b6,PodSandboxId:7a1dd3f30cd5de949d224e9c84db6f4c3e6efd08138982ab8c1c7e7acd1621b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722730570889029060,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b6695481-0ca0-446c-b491-4547368cc051,},Annotations:map[string]string{io.kubernetes.container.hash: a09c37b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c,PodSandboxId:0540bff7659810128faae4cbbecdc9f03ae377a30328b75b9b66984076a0b82e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730567588074137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7gbcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bf46b6f-da6d-4d8a-9b91-6c11f5225072,},Annotations:map[string]string{io.kubernetes.container.hash: 52a6f937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c,PodSandboxId:ecece52031ec1653cbfe2682b6046345bb3d08fbeb6317a6222527e3884d5d7c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730560082175160,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
373a00e8-1604-4d33-a4aa-95d3a0caf930,},Annotations:map[string]string{io.kubernetes.container.hash: ab56f07b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b,PodSandboxId:1d6379cc912f2b07244de2807b66cea9dd017b7b54194515b1a2e70c30a46ed2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722730560028738545,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wk8zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2637a235-d0b5-46f3-bbad-ac7386ce6
1c7,},Annotations:map[string]string{io.kubernetes.container.hash: 52aa126e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc,PodSandboxId:e2fac095f10c13e8ee1fa8a05f391fb51935405646025421ca9ad88f05600679,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722730555336246146,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 696fa13e27497b0cd143575077a4c241,},Annotations:map[string]string{io.kub
ernetes.container.hash: b0fd39c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163,PodSandboxId:f34e54e96c7547a5ca6ec74bc86f23d27376fc06bf38c9b3cdcaa1002e7e15df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722730555268081158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73d702c4a5848aa0880624d62984698a,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: c7c255f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac,PodSandboxId:26709f1531df569a335eec36b159f717f679f6b463fddf1010073c59da95e882,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722730555251303152,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e9db6afa424e7201fe478e5d027be3a,},Annotations:map[string]string{io.kubernetes.container.hash:
7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12,PodSandboxId:8c55ed6a349659c4b2c6c01bdc56cdfb85e021dcc9262e5e372ac765152d6f82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722730555227630918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-877598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf045931b294cba33c8aecb9fc5fc6c7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9aaaafa6-e844-4017-89a5-7c128aac1e68 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5820e4bb2538f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   ecece52031ec1       storage-provisioner
	8c57112216393       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   7a1dd3f30cd5d       busybox
	102bbb96ee07a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   0540bff765981       coredns-7db6d8ff4d-7gbcf
	b4591fddfa08b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   ecece52031ec1       storage-provisioner
	08432bdee33dc       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      19 minutes ago      Running             kube-proxy                1                   1d6379cc912f2       kube-proxy-wk8zf
	7327ad855d4f6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   e2fac095f10c1       etcd-embed-certs-877598
	d044ac1fa318f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      19 minutes ago      Running             kube-apiserver            1                   f34e54e96c754       kube-apiserver-embed-certs-877598
	5cdb842231bc7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      19 minutes ago      Running             kube-scheduler            1                   26709f1531df5       kube-scheduler-embed-certs-877598
	d7780d9d7ff2f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      19 minutes ago      Running             kube-controller-manager   1                   8c55ed6a34965       kube-controller-manager-embed-certs-877598
	
	
	==> coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56021 - 45716 "HINFO IN 4793388100201839205.6480537112018857910. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01519527s
	
	
	==> describe nodes <==
	Name:               embed-certs-877598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-877598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=embed-certs-877598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_06_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:06:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-877598
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:35:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:31:47 +0000   Sun, 04 Aug 2024 00:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:31:47 +0000   Sun, 04 Aug 2024 00:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:31:47 +0000   Sun, 04 Aug 2024 00:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:31:47 +0000   Sun, 04 Aug 2024 00:16:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.140
	  Hostname:    embed-certs-877598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d518e0e244d4c3bb6414a29d58c2ba9
	  System UUID:                9d518e0e-244d-4c3b-b641-4a29d58c2ba9
	  Boot ID:                    f2fd7776-b47a-43b1-9475-185f492b3df2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-7gbcf                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-877598                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-877598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-877598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-wk8zf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-877598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-hbcm9               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-877598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-877598 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-877598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-877598 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node embed-certs-877598 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-877598 event: Registered Node embed-certs-877598 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-877598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-877598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-877598 event: Registered Node embed-certs-877598 in Controller
	
	
	==> dmesg <==
	[Aug 4 00:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063188] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.051450] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.295183] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.731580] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.444155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.989590] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.065771] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064255] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.195185] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.122052] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.294854] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.645987] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.059555] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.132092] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.708507] kauditd_printk_skb: 97 callbacks suppressed
	[Aug 4 00:16] systemd-fstab-generator[1532]: Ignoring "noauto" option for root device
	[  +1.782019] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.316081] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] <==
	{"level":"info","ts":"2024-08-04T00:25:57.446846Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":850,"took":"9.671974ms","hash":2058986012,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2215936,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-04T00:25:57.446976Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2058986012,"revision":850,"compact-revision":-1}
	{"level":"info","ts":"2024-08-04T00:30:57.443066Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1092}
	{"level":"info","ts":"2024-08-04T00:30:57.446502Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1092,"took":"3.117583ms","hash":2041292269,"current-db-size-bytes":2215936,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1122304,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2024-08-04T00:30:57.446603Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2041292269,"revision":1092,"compact-revision":850}
	{"level":"info","ts":"2024-08-04T00:35:22.323376Z","caller":"traceutil/trace.go:171","msg":"trace[1716425408] linearizableReadLoop","detail":"{readStateIndex:1818; appliedIndex:1817; }","duration":"294.090684ms","start":"2024-08-04T00:35:22.029241Z","end":"2024-08-04T00:35:22.323331Z","steps":["trace[1716425408] 'read index received'  (duration: 293.895889ms)","trace[1716425408] 'applied index is now lower than readState.Index'  (duration: 194.101µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T00:35:22.323643Z","caller":"traceutil/trace.go:171","msg":"trace[146301058] transaction","detail":"{read_only:false; response_revision:1549; number_of_response:1; }","duration":"644.795691ms","start":"2024-08-04T00:35:21.678833Z","end":"2024-08-04T00:35:22.323629Z","steps":["trace[146301058] 'process raft request'  (duration: 644.340609ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:22.324183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.001431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:35:22.324368Z","caller":"traceutil/trace.go:171","msg":"trace[1503415663] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1549; }","duration":"216.238209ms","start":"2024-08-04T00:35:22.108116Z","end":"2024-08-04T00:35:22.324355Z","steps":["trace[1503415663] 'agreement among raft nodes before linearized reading'  (duration: 215.978794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:22.324752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.503603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.140\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-08-04T00:35:22.324932Z","caller":"traceutil/trace.go:171","msg":"trace[243131382] range","detail":"{range_begin:/registry/masterleases/192.168.50.140; range_end:; response_count:1; response_revision:1549; }","duration":"295.703594ms","start":"2024-08-04T00:35:22.029216Z","end":"2024-08-04T00:35:22.32492Z","steps":["trace[243131382] 'agreement among raft nodes before linearized reading'  (duration: 295.492976ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:22.325286Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:35:21.678814Z","time spent":"644.861618ms","remote":"127.0.0.1:54340","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-877598\" mod_revision:1541 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-877598\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-877598\" > >"}
	{"level":"info","ts":"2024-08-04T00:35:22.664802Z","caller":"traceutil/trace.go:171","msg":"trace[1684036242] linearizableReadLoop","detail":"{readStateIndex:1819; appliedIndex:1818; }","duration":"292.052888ms","start":"2024-08-04T00:35:22.372705Z","end":"2024-08-04T00:35:22.664758Z","steps":["trace[1684036242] 'read index received'  (duration: 196.130775ms)","trace[1684036242] 'applied index is now lower than readState.Index'  (duration: 95.920938ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T00:35:22.665386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.685389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:611"}
	{"level":"info","ts":"2024-08-04T00:35:22.665435Z","caller":"traceutil/trace.go:171","msg":"trace[609793397] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1549; }","duration":"292.768879ms","start":"2024-08-04T00:35:22.372654Z","end":"2024-08-04T00:35:22.665423Z","steps":["trace[609793397] 'agreement among raft nodes before linearized reading'  (duration: 292.632232ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:22.665259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:35:22.326409Z","time spent":"338.846241ms","remote":"127.0.0.1:54144","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-08-04T00:35:22.665744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.076165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:35:22.665792Z","caller":"traceutil/trace.go:171","msg":"trace[817278797] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1549; }","duration":"173.228476ms","start":"2024-08-04T00:35:22.492553Z","end":"2024-08-04T00:35:22.665781Z","steps":["trace[817278797] 'agreement among raft nodes before linearized reading'  (duration: 173.16516ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:23.132721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.381488ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4603682788148356971 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1548 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:522 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-04T00:35:23.132817Z","caller":"traceutil/trace.go:171","msg":"trace[889964521] linearizableReadLoop","detail":"{readStateIndex:1821; appliedIndex:1820; }","duration":"431.037124ms","start":"2024-08-04T00:35:22.701766Z","end":"2024-08-04T00:35:23.132803Z","steps":["trace[889964521] 'read index received'  (duration: 207.271701ms)","trace[889964521] 'applied index is now lower than readState.Index'  (duration: 223.764293ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T00:35:23.13303Z","caller":"traceutil/trace.go:171","msg":"trace[396577746] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"462.842292ms","start":"2024-08-04T00:35:22.670172Z","end":"2024-08-04T00:35:23.133015Z","steps":["trace[396577746] 'process raft request'  (duration: 238.878321ms)","trace[396577746] 'compare'  (duration: 223.175145ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T00:35:23.133128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:35:22.67016Z","time spent":"462.928465ms","remote":"127.0.0.1:54246","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":595,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1548 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:522 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-04T00:35:23.133365Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"431.596083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-08-04T00:35:23.133405Z","caller":"traceutil/trace.go:171","msg":"trace[1157829494] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1551; }","duration":"431.661781ms","start":"2024-08-04T00:35:22.701738Z","end":"2024-08-04T00:35:23.133399Z","steps":["trace[1157829494] 'agreement among raft nodes before linearized reading'  (duration: 431.557369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:23.133468Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:35:22.701722Z","time spent":"431.702917ms","remote":"127.0.0.1:54246","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":445,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	
	
	==> kernel <==
	 00:35:51 up 20 min,  0 users,  load average: 0.15, 0.15, 0.16
	Linux embed-certs-877598 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] <==
	E0804 00:30:59.775800       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:30:59.776833       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:31:59.776403       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:31:59.776525       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:31:59.776538       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:31:59.777807       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:31:59.777918       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:31:59.777928       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:33:59.777499       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:33:59.777945       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0804 00:33:59.777997       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:33:59.778057       1 handler_proxy.go:93] no RequestInfo found in the context
	E0804 00:33:59.778171       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0804 00:33:59.779672       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0804 00:35:22.326091       1 trace.go:236] Trace[1812843641]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c842ff73-bc90-4971-861d-cd19c4350cf8,client:192.168.50.140,api-group:coordination.k8s.io,api-version:v1,name:embed-certs-877598,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-877598,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PUT (04-Aug-2024 00:35:21.677) (total time: 648ms):
	Trace[1812843641]: ["GuaranteedUpdate etcd3" audit-id:c842ff73-bc90-4971-861d-cd19c4350cf8,key:/leases/kube-node-lease/embed-certs-877598,type:*coordination.Lease,resource:leases.coordination.k8s.io 648ms (00:35:21.677)
	Trace[1812843641]:  ---"Txn call completed" 647ms (00:35:22.325)]
	Trace[1812843641]: [648.736598ms] [648.736598ms] END
	I0804 00:35:22.700466       1 trace.go:236] Trace[2007990265]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.140,type:*v1.Endpoints,resource:apiServerIPInfo (04-Aug-2024 00:35:22.028) (total time: 671ms):
	Trace[2007990265]: ---"initial value restored" 296ms (00:35:22.325)
	Trace[2007990265]: ---"Transaction prepared" 340ms (00:35:22.666)
	Trace[2007990265]: [671.632578ms] [671.632578ms] END
	
	
	==> kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] <==
	I0804 00:30:14.896775       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:30:44.291657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:30:44.904399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:31:14.296733       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:31:14.912712       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:31:44.303109       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:31:44.921378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:32:14.308881       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:32:14.929753       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:32:15.597613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="253.828µs"
	I0804 00:32:30.594393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="132.399µs"
	E0804 00:32:44.314413       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:32:44.937352       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:33:14.321077       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:33:14.945351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:33:44.325734       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:33:44.956751       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:34:14.332231       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:34:14.963623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:34:44.336752       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:34:44.972773       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:35:14.341468       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:35:14.984811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:35:44.345881       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0804 00:35:44.994308       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] <==
	I0804 00:16:00.239998       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:16:00.254109       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.140"]
	I0804 00:16:00.293729       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:16:00.293832       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:16:00.293850       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:16:00.298831       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:16:00.299086       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:16:00.299120       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:16:00.300801       1 config.go:192] "Starting service config controller"
	I0804 00:16:00.300830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:16:00.300864       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:16:00.300867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:16:00.301227       1 config.go:319] "Starting node config controller"
	I0804 00:16:00.301258       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:16:00.401779       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:16:00.401833       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:16:00.401875       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] <==
	I0804 00:15:56.515055       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:15:58.746042       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:15:58.746170       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:15:58.746265       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:15:58.746289       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:15:58.804051       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:15:58.804093       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:15:58.810907       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:15:58.811137       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:15:58.811157       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:15:58.811179       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:15:58.911853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:32:54 embed-certs-877598 kubelet[941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:32:54 embed-certs-877598 kubelet[941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:32:56 embed-certs-877598 kubelet[941]: E0804 00:32:56.581054     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:33:10 embed-certs-877598 kubelet[941]: E0804 00:33:10.579459     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:33:22 embed-certs-877598 kubelet[941]: E0804 00:33:22.580838     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:33:37 embed-certs-877598 kubelet[941]: E0804 00:33:37.580306     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:33:51 embed-certs-877598 kubelet[941]: E0804 00:33:51.581146     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:33:54 embed-certs-877598 kubelet[941]: E0804 00:33:54.600345     941 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:33:54 embed-certs-877598 kubelet[941]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:33:54 embed-certs-877598 kubelet[941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:33:54 embed-certs-877598 kubelet[941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:33:54 embed-certs-877598 kubelet[941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:34:06 embed-certs-877598 kubelet[941]: E0804 00:34:06.580774     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:34:21 embed-certs-877598 kubelet[941]: E0804 00:34:21.581720     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:34:35 embed-certs-877598 kubelet[941]: E0804 00:34:35.579593     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:34:48 embed-certs-877598 kubelet[941]: E0804 00:34:48.580948     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:34:54 embed-certs-877598 kubelet[941]: E0804 00:34:54.599317     941 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:34:54 embed-certs-877598 kubelet[941]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:34:54 embed-certs-877598 kubelet[941]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:34:54 embed-certs-877598 kubelet[941]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:34:54 embed-certs-877598 kubelet[941]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:35:01 embed-certs-877598 kubelet[941]: E0804 00:35:01.580373     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:35:15 embed-certs-877598 kubelet[941]: E0804 00:35:15.580025     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:35:30 embed-certs-877598 kubelet[941]: E0804 00:35:30.580160     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	Aug 04 00:35:42 embed-certs-877598 kubelet[941]: E0804 00:35:42.580191     941 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hbcm9" podUID="de6ad720-ed0c-41ea-a1b4-716443257d7e"
	
	
	==> storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] <==
	I0804 00:16:30.907491       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:16:30.928277       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:16:30.928509       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:16:30.942479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:16:30.942823       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-877598_124f64e3-34ea-493a-a521-c50e141e6a3d!
	I0804 00:16:30.943278       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5d1ca75-7f2e-4986-ab8d-28a787066197", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-877598_124f64e3-34ea-493a-a521-c50e141e6a3d became leader
	I0804 00:16:31.046703       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-877598_124f64e3-34ea-493a-a521-c50e141e6a3d!
	
	
	==> storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] <==
	I0804 00:16:00.222244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0804 00:16:30.226800       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-877598 -n embed-certs-877598
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-877598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hbcm9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-877598 describe pod metrics-server-569cc877fc-hbcm9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-877598 describe pod metrics-server-569cc877fc-hbcm9: exit status 1 (81.641856ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hbcm9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-877598 describe pod metrics-server-569cc877fc-hbcm9: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (381.19s)
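The post-mortem above lists metrics-server-569cc877fc-hbcm9 as the only non-running pod, and the kubelet log shows it stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, the deliberately unreachable registry this suite passes to addons enable metrics-server (the Audit table below records the same --registries=MetricsServer=fake.domain flag for the other profiles). As a rough manual cross-check, assuming the embed-certs-877598 context were still present, the same state could be inspected with standard kubectl commands; this is an illustrative sketch, not output from the recorded run:

	# pods the AddonExistsAfterStop check waits for
	kubectl --context embed-certs-877598 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# image actually configured on the metrics-server pod
	kubectl --context embed-certs-877598 -n kube-system get pods -l k8s-app=metrics-server -o jsonpath='{.items[*].spec.containers[*].image}'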

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (370.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118016 -n no-preload-118016
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-04 00:35:47.743939966 +0000 UTC m=+6483.686184001
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-118016 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-118016 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.864µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-118016 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
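Expressed as plain kubectl, the check being performed here is roughly the following; this is an illustrative sketch (not output from the run) and assumes the no-preload-118016 context is reachable:

	# wait for the dashboard pods the test polls for
	kubectl --context no-preload-118016 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# the deployment whose image is expected to contain registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-118016 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper

The expected image comes from the earlier addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4 invocation recorded in the Audit table below.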
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-118016 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-118016 logs -n 25: (1.269357252s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-576210        | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:33 UTC | 04 Aug 24 00:33 UTC |
	| start   | -p newest-cni-836281 --memory=2200 --alsologtostderr   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:33 UTC | 04 Aug 24 00:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-836281             | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-836281                  | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-836281 --memory=2200 --alsologtostderr   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:34 UTC | 04 Aug 24 00:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-836281 image list                           | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| delete  | -p newest-cni-836281                                   | newest-cni-836281            | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC | 04 Aug 24 00:35 UTC |
	| start   | -p auto-159277 --memory=3072                           | auto-159277                  | jenkins | v1.33.1 | 04 Aug 24 00:35 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:35:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:35:42.011026   73264 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:35:42.011114   73264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:35:42.011118   73264 out.go:304] Setting ErrFile to fd 2...
	I0804 00:35:42.011123   73264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:35:42.011314   73264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:35:42.012061   73264 out.go:298] Setting JSON to false
	I0804 00:35:42.013770   73264 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8286,"bootTime":1722723456,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:35:42.013838   73264 start.go:139] virtualization: kvm guest
	I0804 00:35:42.015966   73264 out.go:177] * [auto-159277] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:35:42.017804   73264 notify.go:220] Checking for updates...
	I0804 00:35:42.017836   73264 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:35:42.019252   73264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:35:42.020574   73264 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:35:42.021889   73264 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:35:42.023124   73264 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:35:42.024293   73264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:35:42.025793   73264 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:42.025927   73264 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:35:42.026043   73264 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:35:42.026202   73264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:35:42.064315   73264 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:35:42.065542   73264 start.go:297] selected driver: kvm2
	I0804 00:35:42.065557   73264 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:35:42.065568   73264 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:35:42.066262   73264 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:35:42.066349   73264 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:35:42.082329   73264 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:35:42.082372   73264 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:35:42.082647   73264 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:35:42.082722   73264 cni.go:84] Creating CNI manager for ""
	I0804 00:35:42.082735   73264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:35:42.082745   73264 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:35:42.082794   73264 start.go:340] cluster config:
	{Name:auto-159277 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:35:42.082909   73264 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:35:42.084958   73264 out.go:177] * Starting "auto-159277" primary control-plane node in "auto-159277" cluster
	I0804 00:35:42.086388   73264 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:35:42.086435   73264 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:35:42.086448   73264 cache.go:56] Caching tarball of preloaded images
	I0804 00:35:42.086530   73264 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:35:42.086551   73264 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:35:42.086635   73264 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/config.json ...
	I0804 00:35:42.086651   73264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/auto-159277/config.json: {Name:mk472f8a3c964ed33d6a13ac0b17e75b972d9932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:35:42.086811   73264 start.go:360] acquireMachinesLock for auto-159277: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:35:42.086846   73264 start.go:364] duration metric: took 19.08µs to acquireMachinesLock for "auto-159277"
	I0804 00:35:42.086870   73264 start.go:93] Provisioning new machine with config: &{Name:auto-159277 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-159277 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:35:42.086927   73264 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:35:42.088456   73264 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0804 00:35:42.088571   73264 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:35:42.088608   73264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:35:42.103048   73264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0804 00:35:42.103571   73264 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:35:42.104089   73264 main.go:141] libmachine: Using API Version  1
	I0804 00:35:42.104109   73264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:35:42.104411   73264 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:35:42.104587   73264 main.go:141] libmachine: (auto-159277) Calling .GetMachineName
	I0804 00:35:42.104759   73264 main.go:141] libmachine: (auto-159277) Calling .DriverName
	I0804 00:35:42.104934   73264 start.go:159] libmachine.API.Create for "auto-159277" (driver="kvm2")
	I0804 00:35:42.104969   73264 client.go:168] LocalClient.Create starting
	I0804 00:35:42.104997   73264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem
	I0804 00:35:42.105025   73264 main.go:141] libmachine: Decoding PEM data...
	I0804 00:35:42.105042   73264 main.go:141] libmachine: Parsing certificate...
	I0804 00:35:42.105102   73264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem
	I0804 00:35:42.105121   73264 main.go:141] libmachine: Decoding PEM data...
	I0804 00:35:42.105139   73264 main.go:141] libmachine: Parsing certificate...
	I0804 00:35:42.105155   73264 main.go:141] libmachine: Running pre-create checks...
	I0804 00:35:42.105163   73264 main.go:141] libmachine: (auto-159277) Calling .PreCreateCheck
	I0804 00:35:42.105578   73264 main.go:141] libmachine: (auto-159277) Calling .GetConfigRaw
	I0804 00:35:42.105962   73264 main.go:141] libmachine: Creating machine...
	I0804 00:35:42.105974   73264 main.go:141] libmachine: (auto-159277) Calling .Create
	I0804 00:35:42.106132   73264 main.go:141] libmachine: (auto-159277) Creating KVM machine...
	I0804 00:35:42.107567   73264 main.go:141] libmachine: (auto-159277) DBG | found existing default KVM network
	I0804 00:35:42.108830   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.108684   73287 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b7:12:58} reservation:<nil>}
	I0804 00:35:42.109767   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.109686   73287 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:ff:12} reservation:<nil>}
	I0804 00:35:42.110590   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.110496   73287 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:d7:ed:e8} reservation:<nil>}
	I0804 00:35:42.111677   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.111596   73287 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030afc0}
	I0804 00:35:42.111709   73264 main.go:141] libmachine: (auto-159277) DBG | created network xml: 
	I0804 00:35:42.111722   73264 main.go:141] libmachine: (auto-159277) DBG | <network>
	I0804 00:35:42.111735   73264 main.go:141] libmachine: (auto-159277) DBG |   <name>mk-auto-159277</name>
	I0804 00:35:42.111743   73264 main.go:141] libmachine: (auto-159277) DBG |   <dns enable='no'/>
	I0804 00:35:42.111754   73264 main.go:141] libmachine: (auto-159277) DBG |   
	I0804 00:35:42.111764   73264 main.go:141] libmachine: (auto-159277) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0804 00:35:42.111772   73264 main.go:141] libmachine: (auto-159277) DBG |     <dhcp>
	I0804 00:35:42.111785   73264 main.go:141] libmachine: (auto-159277) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0804 00:35:42.111795   73264 main.go:141] libmachine: (auto-159277) DBG |     </dhcp>
	I0804 00:35:42.111803   73264 main.go:141] libmachine: (auto-159277) DBG |   </ip>
	I0804 00:35:42.111817   73264 main.go:141] libmachine: (auto-159277) DBG |   
	I0804 00:35:42.111836   73264 main.go:141] libmachine: (auto-159277) DBG | </network>
	I0804 00:35:42.111850   73264 main.go:141] libmachine: (auto-159277) DBG | 
	I0804 00:35:42.117864   73264 main.go:141] libmachine: (auto-159277) DBG | trying to create private KVM network mk-auto-159277 192.168.72.0/24...
	I0804 00:35:42.196565   73264 main.go:141] libmachine: (auto-159277) DBG | private KVM network mk-auto-159277 192.168.72.0/24 created
	I0804 00:35:42.196597   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.196525   73287 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:35:42.196610   73264 main.go:141] libmachine: (auto-159277) Setting up store path in /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277 ...
	I0804 00:35:42.196627   73264 main.go:141] libmachine: (auto-159277) Building disk image from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:35:42.196643   73264 main.go:141] libmachine: (auto-159277) Downloading /home/jenkins/minikube-integration/19364-9607/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:35:42.433080   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.432920   73287 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/id_rsa...
	I0804 00:35:42.494592   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.494462   73287 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/auto-159277.rawdisk...
	I0804 00:35:42.494617   73264 main.go:141] libmachine: (auto-159277) DBG | Writing magic tar header
	I0804 00:35:42.494628   73264 main.go:141] libmachine: (auto-159277) DBG | Writing SSH key tar header
	I0804 00:35:42.494706   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:42.494638   73287 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277 ...
	I0804 00:35:42.494796   73264 main.go:141] libmachine: (auto-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277
	I0804 00:35:42.494811   73264 main.go:141] libmachine: (auto-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube/machines
	I0804 00:35:42.494825   73264 main.go:141] libmachine: (auto-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277 (perms=drwx------)
	I0804 00:35:42.494835   73264 main.go:141] libmachine: (auto-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:35:42.494847   73264 main.go:141] libmachine: (auto-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-9607
	I0804 00:35:42.494864   73264 main.go:141] libmachine: (auto-159277) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:35:42.494880   73264 main.go:141] libmachine: (auto-159277) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:35:42.494890   73264 main.go:141] libmachine: (auto-159277) DBG | Checking permissions on dir: /home
	I0804 00:35:42.494914   73264 main.go:141] libmachine: (auto-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:35:42.494934   73264 main.go:141] libmachine: (auto-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607/.minikube (perms=drwxr-xr-x)
	I0804 00:35:42.494947   73264 main.go:141] libmachine: (auto-159277) Setting executable bit set on /home/jenkins/minikube-integration/19364-9607 (perms=drwxrwxr-x)
	I0804 00:35:42.494962   73264 main.go:141] libmachine: (auto-159277) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:35:42.494975   73264 main.go:141] libmachine: (auto-159277) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:35:42.494983   73264 main.go:141] libmachine: (auto-159277) DBG | Skipping /home - not owner
	I0804 00:35:42.494994   73264 main.go:141] libmachine: (auto-159277) Creating domain...
	I0804 00:35:42.497367   73264 main.go:141] libmachine: (auto-159277) define libvirt domain using xml: 
	I0804 00:35:42.497392   73264 main.go:141] libmachine: (auto-159277) <domain type='kvm'>
	I0804 00:35:42.497404   73264 main.go:141] libmachine: (auto-159277)   <name>auto-159277</name>
	I0804 00:35:42.497424   73264 main.go:141] libmachine: (auto-159277)   <memory unit='MiB'>3072</memory>
	I0804 00:35:42.497435   73264 main.go:141] libmachine: (auto-159277)   <vcpu>2</vcpu>
	I0804 00:35:42.497446   73264 main.go:141] libmachine: (auto-159277)   <features>
	I0804 00:35:42.497473   73264 main.go:141] libmachine: (auto-159277)     <acpi/>
	I0804 00:35:42.497485   73264 main.go:141] libmachine: (auto-159277)     <apic/>
	I0804 00:35:42.497493   73264 main.go:141] libmachine: (auto-159277)     <pae/>
	I0804 00:35:42.497499   73264 main.go:141] libmachine: (auto-159277)     
	I0804 00:35:42.497510   73264 main.go:141] libmachine: (auto-159277)   </features>
	I0804 00:35:42.497519   73264 main.go:141] libmachine: (auto-159277)   <cpu mode='host-passthrough'>
	I0804 00:35:42.497527   73264 main.go:141] libmachine: (auto-159277)   
	I0804 00:35:42.497541   73264 main.go:141] libmachine: (auto-159277)   </cpu>
	I0804 00:35:42.497608   73264 main.go:141] libmachine: (auto-159277)   <os>
	I0804 00:35:42.497638   73264 main.go:141] libmachine: (auto-159277)     <type>hvm</type>
	I0804 00:35:42.497650   73264 main.go:141] libmachine: (auto-159277)     <boot dev='cdrom'/>
	I0804 00:35:42.497670   73264 main.go:141] libmachine: (auto-159277)     <boot dev='hd'/>
	I0804 00:35:42.497683   73264 main.go:141] libmachine: (auto-159277)     <bootmenu enable='no'/>
	I0804 00:35:42.497690   73264 main.go:141] libmachine: (auto-159277)   </os>
	I0804 00:35:42.497700   73264 main.go:141] libmachine: (auto-159277)   <devices>
	I0804 00:35:42.497711   73264 main.go:141] libmachine: (auto-159277)     <disk type='file' device='cdrom'>
	I0804 00:35:42.497737   73264 main.go:141] libmachine: (auto-159277)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/boot2docker.iso'/>
	I0804 00:35:42.497766   73264 main.go:141] libmachine: (auto-159277)       <target dev='hdc' bus='scsi'/>
	I0804 00:35:42.497782   73264 main.go:141] libmachine: (auto-159277)       <readonly/>
	I0804 00:35:42.497788   73264 main.go:141] libmachine: (auto-159277)     </disk>
	I0804 00:35:42.497800   73264 main.go:141] libmachine: (auto-159277)     <disk type='file' device='disk'>
	I0804 00:35:42.497814   73264 main.go:141] libmachine: (auto-159277)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:35:42.497831   73264 main.go:141] libmachine: (auto-159277)       <source file='/home/jenkins/minikube-integration/19364-9607/.minikube/machines/auto-159277/auto-159277.rawdisk'/>
	I0804 00:35:42.497840   73264 main.go:141] libmachine: (auto-159277)       <target dev='hda' bus='virtio'/>
	I0804 00:35:42.497846   73264 main.go:141] libmachine: (auto-159277)     </disk>
	I0804 00:35:42.497871   73264 main.go:141] libmachine: (auto-159277)     <interface type='network'>
	I0804 00:35:42.497890   73264 main.go:141] libmachine: (auto-159277)       <source network='mk-auto-159277'/>
	I0804 00:35:42.497923   73264 main.go:141] libmachine: (auto-159277)       <model type='virtio'/>
	I0804 00:35:42.497939   73264 main.go:141] libmachine: (auto-159277)     </interface>
	I0804 00:35:42.497952   73264 main.go:141] libmachine: (auto-159277)     <interface type='network'>
	I0804 00:35:42.497963   73264 main.go:141] libmachine: (auto-159277)       <source network='default'/>
	I0804 00:35:42.497974   73264 main.go:141] libmachine: (auto-159277)       <model type='virtio'/>
	I0804 00:35:42.497984   73264 main.go:141] libmachine: (auto-159277)     </interface>
	I0804 00:35:42.498009   73264 main.go:141] libmachine: (auto-159277)     <serial type='pty'>
	I0804 00:35:42.498023   73264 main.go:141] libmachine: (auto-159277)       <target port='0'/>
	I0804 00:35:42.498034   73264 main.go:141] libmachine: (auto-159277)     </serial>
	I0804 00:35:42.498044   73264 main.go:141] libmachine: (auto-159277)     <console type='pty'>
	I0804 00:35:42.498053   73264 main.go:141] libmachine: (auto-159277)       <target type='serial' port='0'/>
	I0804 00:35:42.498062   73264 main.go:141] libmachine: (auto-159277)     </console>
	I0804 00:35:42.498083   73264 main.go:141] libmachine: (auto-159277)     <rng model='virtio'>
	I0804 00:35:42.498095   73264 main.go:141] libmachine: (auto-159277)       <backend model='random'>/dev/random</backend>
	I0804 00:35:42.498103   73264 main.go:141] libmachine: (auto-159277)     </rng>
	I0804 00:35:42.498108   73264 main.go:141] libmachine: (auto-159277)     
	I0804 00:35:42.498117   73264 main.go:141] libmachine: (auto-159277)     
	I0804 00:35:42.498125   73264 main.go:141] libmachine: (auto-159277)   </devices>
	I0804 00:35:42.498145   73264 main.go:141] libmachine: (auto-159277) </domain>
	I0804 00:35:42.498163   73264 main.go:141] libmachine: (auto-159277) 
	I0804 00:35:42.502233   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:c9:b4:b9 in network default
	I0804 00:35:42.502971   73264 main.go:141] libmachine: (auto-159277) Ensuring networks are active...
	I0804 00:35:42.502987   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:42.503920   73264 main.go:141] libmachine: (auto-159277) Ensuring network default is active
	I0804 00:35:42.504285   73264 main.go:141] libmachine: (auto-159277) Ensuring network mk-auto-159277 is active
	I0804 00:35:42.504864   73264 main.go:141] libmachine: (auto-159277) Getting domain xml...
	I0804 00:35:42.505690   73264 main.go:141] libmachine: (auto-159277) Creating domain...
	I0804 00:35:43.759718   73264 main.go:141] libmachine: (auto-159277) Waiting to get IP...
	I0804 00:35:43.760598   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:43.761075   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:43.761131   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:43.761059   73287 retry.go:31] will retry after 249.286249ms: waiting for machine to come up
	I0804 00:35:44.012323   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:44.012778   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:44.012818   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:44.012727   73287 retry.go:31] will retry after 342.895459ms: waiting for machine to come up
	I0804 00:35:44.357235   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:44.357838   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:44.357866   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:44.357795   73287 retry.go:31] will retry after 392.762464ms: waiting for machine to come up
	I0804 00:35:44.752455   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:44.752895   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:44.752924   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:44.752877   73287 retry.go:31] will retry after 598.75434ms: waiting for machine to come up
	I0804 00:35:45.353551   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:45.354080   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:45.354111   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:45.354044   73287 retry.go:31] will retry after 615.714449ms: waiting for machine to come up
	I0804 00:35:45.971864   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:45.972220   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:45.972263   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:45.972188   73287 retry.go:31] will retry after 742.783333ms: waiting for machine to come up
	I0804 00:35:46.715946   73264 main.go:141] libmachine: (auto-159277) DBG | domain auto-159277 has defined MAC address 52:54:00:99:56:51 in network mk-auto-159277
	I0804 00:35:46.716302   73264 main.go:141] libmachine: (auto-159277) DBG | unable to find current IP address of domain auto-159277 in network mk-auto-159277
	I0804 00:35:46.716331   73264 main.go:141] libmachine: (auto-159277) DBG | I0804 00:35:46.716261   73287 retry.go:31] will retry after 1.098558988s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.367686889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731748367663353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2031e0fa-ea83-474a-b50f-1b12847662ff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.368700556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1c82db3-37ca-4517-887f-feec9fccd75d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.368842843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1c82db3-37ca-4517-887f-feec9fccd75d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.369066580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1c82db3-37ca-4517-887f-feec9fccd75d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.412313018Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b06c01a-b9bb-453a-84e5-3d1407ed4a26 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.412413261Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b06c01a-b9bb-453a-84e5-3d1407ed4a26 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.414010490Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=985328fd-b97c-48e3-a5c3-b03f37532e9e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.414490154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731748414463763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=985328fd-b97c-48e3-a5c3-b03f37532e9e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.415132709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e05176a4-2528-4fe5-af3e-b4a4878c44e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.415207517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e05176a4-2528-4fe5-af3e-b4a4878c44e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.415484813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e05176a4-2528-4fe5-af3e-b4a4878c44e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.455612973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0c6dd27-bf17-4042-96a2-b72df7e19f73 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.455684799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0c6dd27-bf17-4042-96a2-b72df7e19f73 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.457071857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=102496b7-f636-41e0-970f-1b630b3c8460 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.457572708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731748457374807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=102496b7-f636-41e0-970f-1b630b3c8460 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.458291690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a993a204-7f5b-4941-9200-cc5d16d297ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.458344602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a993a204-7f5b-4941-9200-cc5d16d297ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.458692197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a993a204-7f5b-4941-9200-cc5d16d297ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.493431526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb52a256-bfde-4cda-adaa-a4d3192d9d1f name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.493511952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb52a256-bfde-4cda-adaa-a4d3192d9d1f name=/runtime.v1.RuntimeService/Version
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.495298260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f7939b3-a626-4948-8c97-90ada9cfa595 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.495667592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731748495645428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f7939b3-a626-4948-8c97-90ada9cfa595 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.496500785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b0c36d9-d229-46ed-b866-74c663a33404 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.496555035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b0c36d9-d229-46ed-b866-74c663a33404 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:35:48 no-preload-118016 crio[723]: time="2024-08-04 00:35:48.496838392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6,PodSandboxId:c6f8edd0330f76cc19d763ba486d668768da75a8d36dd24246448b4cc0535cd9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722730826747654685,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07fdb5fa-a2e9-4d3d-8149-25720c320d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f,PodSandboxId:09234c4c7f59230ce583f38b939f8686dbfed08d095901f1150c18bc7fc80621,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826313016176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gg97s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28bfbbe9-5051-4674-8b43-f07bfdbc6916,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2,PodSandboxId:91c2813710adc1ab2d52544e83a4889b923fd40b7448f6fc6a7b6a03b5e9de75,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722730826176482368,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lj494,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74
baae1c-e4c4-4125-aa9d-aeaac74a6ecd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b,PodSandboxId:5a3f3a2b20c1b39eb6e6177730d17de94c11462651f8b4da1e43c114f3c79bd6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:
1722730825465105468,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4jqng,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c254599f-e58d-4d0a-81c9-1c98c0341f26,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e,PodSandboxId:a888bbecc6e161ad4f5b5de5ec1dcd8e118d9a0a993576132ee45678a7c0bca6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722730814777255937,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc454595d6fcef8957e46922bb70e36,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438,PodSandboxId:7cdbbde7b11a7ef3dff5f02ed5b0c8db247f7ef90f8d8d7af2d4a8470334e28a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722730814708925486,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a,PodSandboxId:fbdb61c5a4c0448f4d6eedd48644d25a1adc35f919d5ff446d45af8860473314,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722730814630099645,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd018487a3261c4ff0832bfaea361607,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678,PodSandboxId:f879716552436bdf74db0a2e7aae72ef3c68ae4cd823112e19dcdd05fdb3bd0f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722730814546681246,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2457feb0f26f34eb0d94a1a245199e57,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f,PodSandboxId:153678d85c5371837fc6f46f100b86ac02da29e1a92d04af5517e9a4b209245c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722730526193836406,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-118016,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ca92658c07da1ecb6b67e32c5cf2ed0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b0c36d9-d229-46ed-b866-74c663a33404 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5f59b89753e4a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   c6f8edd0330f7       storage-provisioner
	12c65ae645171       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   09234c4c7f592       coredns-6f6b679f8f-gg97s
	28455521ad209       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   91c2813710adc       coredns-6f6b679f8f-lj494
	91f25ada05bec       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   15 minutes ago      Running             kube-proxy                0                   5a3f3a2b20c1b       kube-proxy-4jqng
	0f9d8868414e3       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   15 minutes ago      Running             kube-scheduler            2                   a888bbecc6e16       kube-scheduler-no-preload-118016
	ea380b4ed6c57       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   15 minutes ago      Running             kube-apiserver            2                   7cdbbde7b11a7       kube-apiserver-no-preload-118016
	da969ee0e5a26       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   15 minutes ago      Running             kube-controller-manager   2                   fbdb61c5a4c04       kube-controller-manager-no-preload-118016
	4bc100fbc7b93       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   f879716552436       etcd-no-preload-118016
	65b8d7537c15e       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   20 minutes ago      Exited              kube-apiserver            1                   153678d85c537       kube-apiserver-no-preload-118016
	
	
	==> coredns [12c65ae645171e407b2813910f0e54146ab1831f0f9130a3a26eec8eaa4ca14f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [28455521ad2095446bf081c4170d00e6ccec27f14f667f77e7b788fc2c51c6d2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-118016
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-118016
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=no-preload-118016
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:20:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-118016
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:35:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:35:47 +0000   Sun, 04 Aug 2024 00:20:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:35:47 +0000   Sun, 04 Aug 2024 00:20:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:35:47 +0000   Sun, 04 Aug 2024 00:20:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:35:47 +0000   Sun, 04 Aug 2024 00:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.137
	  Hostname:    no-preload-118016
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 929c75e006db4c36bd710fce742d71c7
	  System UUID:                929c75e0-06db-4c36-bd71-0fce742d71c7
	  Boot ID:                    dfbd9c45-cd25-4f16-b177-f333581a83d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-gg97s                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-lj494                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-118016                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-118016             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-118016    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4jqng                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-118016             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-9gw27              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-118016 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node no-preload-118016 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node no-preload-118016 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node no-preload-118016 event: Registered Node no-preload-118016 in Controller
	
	
	==> dmesg <==
	[  +0.043490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.918981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.533088] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556998] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 4 00:15] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.061011] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068639] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.179678] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.153517] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.605464] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +16.571741] systemd-fstab-generator[1249]: Ignoring "noauto" option for root device
	[  +0.062683] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.971582] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[  +3.314167] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.530260] kauditd_printk_skb: 53 callbacks suppressed
	[  +9.865756] kauditd_printk_skb: 30 callbacks suppressed
	[Aug 4 00:20] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.699624] systemd-fstab-generator[3021]: Ignoring "noauto" option for root device
	[  +6.068622] systemd-fstab-generator[3342]: Ignoring "noauto" option for root device
	[  +0.114342] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.289085] systemd-fstab-generator[3462]: Ignoring "noauto" option for root device
	[  +0.111627] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.674193] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [4bc100fbc7b932603c43e7d427ec410b9665bb63a0ae251fe56cc1d0233bf678] <==
	{"level":"info","ts":"2024-08-04T00:20:15.070313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:20:15.070833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:20:15.071012Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:15.071041Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:15.071643Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:20:15.079442Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.137:2379"}
	{"level":"info","ts":"2024-08-04T00:20:15.082407Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:20:15.086998Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:20:15.087241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c81a097889804662","local-member-id":"cd68190d43a88764","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:15.090964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:15.103001Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:30:15.351250Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-08-04T00:30:15.361582Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":689,"took":"9.460988ms","hash":1815523401,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-04T00:30:15.361653Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1815523401,"revision":689,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-04T00:34:24.675487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.555935ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9756082236903236782 > lease_revoke:<id:0764911ac21da44c>","response":"size:29"}
	{"level":"warn","ts":"2024-08-04T00:34:25.033549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.63128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:34:25.033691Z","caller":"traceutil/trace.go:171","msg":"trace[136348145] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1134; }","duration":"241.8018ms","start":"2024-08-04T00:34:24.791859Z","end":"2024-08-04T00:34:25.033661Z","steps":["trace[136348145] 'range keys from in-memory index tree'  (duration: 241.537951ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:35:15.359631Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":931}
	{"level":"info","ts":"2024-08-04T00:35:15.364687Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":931,"took":"4.670413ms","hash":443287722,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-04T00:35:15.364792Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":443287722,"revision":931,"compact-revision":689}
	{"level":"info","ts":"2024-08-04T00:35:22.581245Z","caller":"traceutil/trace.go:171","msg":"trace[514075818] linearizableReadLoop","detail":"{readStateIndex:1377; appliedIndex:1376; }","duration":"173.71366ms","start":"2024-08-04T00:35:22.407501Z","end":"2024-08-04T00:35:22.581214Z","steps":["trace[514075818] 'read index received'  (duration: 173.462756ms)","trace[514075818] 'applied index is now lower than readState.Index'  (duration: 250.174µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T00:35:22.581464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.950277ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:35:22.581538Z","caller":"traceutil/trace.go:171","msg":"trace[1409274771] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1181; }","duration":"174.037176ms","start":"2024-08-04T00:35:22.407484Z","end":"2024-08-04T00:35:22.581521Z","steps":["trace[1409274771] 'agreement among raft nodes before linearized reading'  (duration: 173.930245ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:35:22.581531Z","caller":"traceutil/trace.go:171","msg":"trace[1208515264] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"710.048413ms","start":"2024-08-04T00:35:21.871456Z","end":"2024-08-04T00:35:22.581504Z","steps":["trace[1208515264] 'process raft request'  (duration: 709.590476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:35:22.583182Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T00:35:21.871421Z","time spent":"710.930651ms","remote":"127.0.0.1:55516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1180 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 00:35:48 up 20 min,  0 users,  load average: 0.01, 0.09, 0.13
	Linux no-preload-118016 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [65b8d7537c15e948e9393b9c0ae8eec10ec494dce9c1d9a41a5e3a904c7c0d8f] <==
	W0804 00:20:06.537155       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.552199       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.615266       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.634256       1 logging.go:55] [core] [Channel #19 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.652094       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.666215       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.696822       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.713599       1 logging.go:55] [core] [Channel #43 SubChannel #44]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.741499       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.751410       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.753027       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.819459       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.833380       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:06.911984       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:07.000837       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:07.064590       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:10.441908       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:10.687048       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:10.795681       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.047547       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.069102       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.069193       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.282375       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.337518       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:20:11.386883       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ea380b4ed6c57516e60c5a8b184a4c25ab0858bcb97c92879ae7acc4bbb3a438] <==
	I0804 00:31:18.281589       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0804 00:31:18.281666       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:33:18.282026       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:33:18.282320       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0804 00:33:18.282431       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:33:18.282504       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0804 00:33:18.283679       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0804 00:33:18.283913       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0804 00:35:17.282601       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:35:17.283225       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0804 00:35:18.284808       1 handler_proxy.go:99] no RequestInfo found in the context
	W0804 00:35:18.284909       1 handler_proxy.go:99] no RequestInfo found in the context
	E0804 00:35:18.285015       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0804 00:35:18.284928       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0804 00:35:18.286314       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0804 00:35:18.286344       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [da969ee0e5a26bedb12514b48871ff47a91af2aacfe243d16021051a9fb1ae8a] <==
	I0804 00:30:24.785025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:30:41.613498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-118016"
	E0804 00:30:54.319996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:30:54.793431       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:31:17.788137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="111.258µs"
	E0804 00:31:24.328416       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:31:24.802088       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:31:31.785357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="109.374µs"
	E0804 00:31:54.334837       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:31:54.811666       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:32:24.342016       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:32:24.820175       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:32:54.349887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:32:54.827983       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:33:24.357626       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:33:24.834814       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:33:54.364852       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:33:54.848896       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:34:24.375483       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:34:24.859044       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:34:54.382379       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:34:54.869795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0804 00:35:24.389825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0804 00:35:24.877410       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0804 00:35:47.591455       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-118016"
	
	
	==> kube-proxy [91f25ada05becf94b2699f197afd4b70de6b3211b94217bca2f9b51c476e439b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0804 00:20:26.056858       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0804 00:20:26.180473       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.137"]
	E0804 00:20:26.184657       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0804 00:20:26.695417       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0804 00:20:26.695481       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:20:26.695511       1 server_linux.go:169] "Using iptables Proxier"
	I0804 00:20:26.700424       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0804 00:20:26.700647       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0804 00:20:26.700674       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:20:26.704112       1 config.go:197] "Starting service config controller"
	I0804 00:20:26.704155       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:20:26.704175       1 config.go:104] "Starting endpoint slice config controller"
	I0804 00:20:26.704178       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:20:26.705323       1 config.go:326] "Starting node config controller"
	I0804 00:20:26.705348       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:20:26.805014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:20:26.805102       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:20:26.806026       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f9d8868414e3c960bb075bcd99e870e5828071367bb340a06e1dd084313253e] <==
	W0804 00:20:17.263856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:20:17.263931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.090208       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 00:20:18.090265       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0804 00:20:18.099179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 00:20:18.099269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.099180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 00:20:18.099372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.200583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:18.200631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.217759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:18.217849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.217877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 00:20:18.218098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.220133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:18.220200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.462573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 00:20:18.462707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.507200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 00:20:18.507633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.537044       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 00:20:18.537925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:20:18.585560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:20:18.585689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0804 00:20:21.034793       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:34:40 no-preload-118016 kubelet[3348]: E0804 00:34:40.005262    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731680004799764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:34:40 no-preload-118016 kubelet[3348]: E0804 00:34:40.005290    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731680004799764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:34:40 no-preload-118016 kubelet[3348]: E0804 00:34:40.771555    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:34:50 no-preload-118016 kubelet[3348]: E0804 00:34:50.006960    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731690006556318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:34:50 no-preload-118016 kubelet[3348]: E0804 00:34:50.006988    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731690006556318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:34:54 no-preload-118016 kubelet[3348]: E0804 00:34:54.769918    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:35:00 no-preload-118016 kubelet[3348]: E0804 00:35:00.009938    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731700009313668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:00 no-preload-118016 kubelet[3348]: E0804 00:35:00.009999    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731700009313668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:06 no-preload-118016 kubelet[3348]: E0804 00:35:06.770674    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:35:10 no-preload-118016 kubelet[3348]: E0804 00:35:10.012344    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731710011990758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:10 no-preload-118016 kubelet[3348]: E0804 00:35:10.012404    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731710011990758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:17 no-preload-118016 kubelet[3348]: E0804 00:35:17.770599    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:35:19 no-preload-118016 kubelet[3348]: E0804 00:35:19.834370    3348 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:35:19 no-preload-118016 kubelet[3348]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:35:19 no-preload-118016 kubelet[3348]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:35:19 no-preload-118016 kubelet[3348]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:35:19 no-preload-118016 kubelet[3348]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:35:20 no-preload-118016 kubelet[3348]: E0804 00:35:20.013960    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731720013608147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:20 no-preload-118016 kubelet[3348]: E0804 00:35:20.013986    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731720013608147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:29 no-preload-118016 kubelet[3348]: E0804 00:35:29.770959    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	Aug 04 00:35:30 no-preload-118016 kubelet[3348]: E0804 00:35:30.016364    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731730015920466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:30 no-preload-118016 kubelet[3348]: E0804 00:35:30.016437    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731730015920466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:40 no-preload-118016 kubelet[3348]: E0804 00:35:40.018556    3348 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731740018039514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:40 no-preload-118016 kubelet[3348]: E0804 00:35:40.019339    3348 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731740018039514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100728,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:35:44 no-preload-118016 kubelet[3348]: E0804 00:35:44.770904    3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-9gw27" podUID="2f3cdf21-9e68-49b9-a6e0-927465738f23"
	
	
	==> storage-provisioner [5f59b89753e4a597652b4a53f13a681bada0e2629949903713f9344c0c937af6] <==
	I0804 00:20:27.010470       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:20:27.042522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:20:27.042614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:20:27.056619       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:20:27.063023       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-118016_ba3a8652-7281-4978-b562-91d934499239!
	I0804 00:20:27.057226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e720b089-28e8-4857-ac6f-14ff33c60ece", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-118016_ba3a8652-7281-4978-b562-91d934499239 became leader
	I0804 00:20:27.164254       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-118016_ba3a8652-7281-4978-b562-91d934499239!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118016 -n no-preload-118016
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-118016 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-9gw27
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-118016 describe pod metrics-server-6867b74b74-9gw27
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-118016 describe pod metrics-server-6867b74b74-9gw27: exit status 1 (63.111899ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-9gw27" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-118016 describe pod metrics-server-6867b74b74-9gw27: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (370.85s)
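Note: the recurring "ImagePullBackOff ... fake.domain/registry.k8s.io/echoserver:1.4" kubelet entries above appear to be a consequence of the test itself, which remaps the metrics-server registry to the unreachable fake.domain (see the Audit log further below). The relevant invocation, reconstructed from that Audit table, looks like:

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-118016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain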

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (102.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.154:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.154:8443: connect: connection refused
E0804 00:33:27.616294   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (217.792713ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-576210" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-576210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-576210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.156µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-576210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
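For reference, the image assertion above could be checked by hand (assuming the apiserver were reachable, which it is not here) with something along the lines of:

	kubectl --context old-k8s-version-576210 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'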
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (219.079763ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-576210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-576210 logs -n 25: (1.659592239s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-302198                           | kubernetes-upgrade-302198    | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-551054 sudo                            | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-551054                                 | NoKubernetes-551054          | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:05 UTC |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:05 UTC | 04 Aug 24 00:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877598            | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-705918                              | cert-expiration-705918       | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-423330 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:07 UTC |
	|         | disable-driver-mounts-423330                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:07 UTC | 04 Aug 24 00:09 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-118016             | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC | 04 Aug 24 00:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-576210        | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:08 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-969068  | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877598                 | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877598                                  | embed-certs-877598           | jenkins | v1.33.1 | 04 Aug 24 00:09 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-576210             | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC | 04 Aug 24 00:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-576210                              | old-k8s-version-576210       | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-118016                  | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:10 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-118016                                   | no-preload-118016            | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-969068       | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-969068 | jenkins | v1.33.1 | 04 Aug 24 00:11 UTC | 04 Aug 24 00:20 UTC |
	|         | default-k8s-diff-port-969068                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:11:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:11:52.361065   65441 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:11:52.361334   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361345   65441 out.go:304] Setting ErrFile to fd 2...
	I0804 00:11:52.361349   65441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:11:52.361548   65441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:11:52.362087   65441 out.go:298] Setting JSON to false
	I0804 00:11:52.363002   65441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6856,"bootTime":1722723456,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:11:52.363061   65441 start.go:139] virtualization: kvm guest
	I0804 00:11:52.365345   65441 out.go:177] * [default-k8s-diff-port-969068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:11:52.367170   65441 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:11:52.367161   65441 notify.go:220] Checking for updates...
	I0804 00:11:52.369837   65441 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:11:52.371134   65441 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:11:52.372226   65441 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:11:52.373445   65441 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:11:52.374802   65441 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:11:52.376375   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:11:52.376787   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.376859   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.392495   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0804 00:11:52.392954   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.393477   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.393497   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.393883   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.394048   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.394313   65441 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:11:52.394606   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:11:52.394638   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:11:52.409194   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0804 00:11:52.409594   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:11:52.410032   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:11:52.410050   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:11:52.410358   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:11:52.410529   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:11:52.445480   65441 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:11:52.446679   65441 start.go:297] selected driver: kvm2
	I0804 00:11:52.446694   65441 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.446827   65441 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:11:52.447792   65441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.447886   65441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:11:52.462893   65441 install.go:137] /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:11:52.463275   65441 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:11:52.463306   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:11:52.463316   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:11:52.463368   65441 start.go:340] cluster config:
	{Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:11:52.463486   65441 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:11:52.465374   65441 out.go:177] * Starting "default-k8s-diff-port-969068" primary control-plane node in "default-k8s-diff-port-969068" cluster
	I0804 00:11:52.466656   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:11:52.466698   65441 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:11:52.466710   65441 cache.go:56] Caching tarball of preloaded images
	I0804 00:11:52.466790   65441 preload.go:172] Found /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:11:52.466801   65441 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:11:52.466901   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:11:52.467100   65441 start.go:360] acquireMachinesLock for default-k8s-diff-port-969068: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
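
The acquireMachinesLock entries in this log show each `minikube start` process serializing on a per-machine lock with a 500ms retry delay and a 13m timeout; several profiles queue on the same lock, which is why later lines report multi-minute "duration metric: took ... to acquireMachinesLock" values. The sketch below is a hypothetical, self-contained illustration of that acquire-with-delay-and-timeout pattern, not minikube's actual locking code (the lock name and durations are taken from the log, everything else is invented for the example).

// lock_sketch.go: minimal named-lock-with-timeout sketch (hypothetical, not minikube code).
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var (
	mu   sync.Mutex
	held = map[string]bool{}
)

// tryAcquire attempts to take the named lock exactly once.
func tryAcquire(name string) bool {
	mu.Lock()
	defer mu.Unlock()
	if held[name] {
		return false
	}
	held[name] = true
	return true
}

// acquireWithTimeout retries every delay until the overall timeout elapses,
// mirroring the {Delay:500ms Timeout:13m0s} spec logged above.
func acquireWithTimeout(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if tryAcquire(name) {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for lock " + name)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquireWithTimeout("default-k8s-diff-port-969068", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("acquired machines lock after %s\n", time.Since(start))
}
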
	I0804 00:11:55.809602   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:11:58.881666   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:04.961665   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:08.033617   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:14.113634   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:17.185623   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:23.265618   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:26.337594   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:32.417583   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:35.489705   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:41.569654   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:44.641653   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:50.721640   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:53.793649   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:12:59.873643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:02.945676   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:09.025652   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:12.097647   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:18.177740   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:21.249606   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:27.329637   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:30.401648   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:36.481588   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:39.553638   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:45.633633   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:48.705646   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:54.785636   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:13:57.857662   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:03.937643   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:07.009557   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:13.089694   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:16.161619   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:22.241650   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
	I0804 00:14:25.313612   64502 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host
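
The long run of "Error dialing TCP: dial tcp 192.168.50.140:22: connect: no route to host" lines above is the embed-certs-877598 start process polling the guest's SSH port every few seconds while the VM is unreachable. A minimal sketch of that poll-until-reachable pattern (assumptions: the address and retry interval are taken from the log; this is not minikube's actual dial code) is:

// ssh_wait_sketch.go: keep dialing an SSH port until it answers or a deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is reachable, provisioning can continue
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for %s: %v", addr, err)
		}
		fmt.Printf("Error dialing TCP: %v (retrying)\n", err)
		time.Sleep(3 * time.Second)
	}
}

func main() {
	// 192.168.50.140:22 is the embed-certs-877598 guest address from the log.
	if err := waitForSSH("192.168.50.140:22", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
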
	I0804 00:14:28.318586   64758 start.go:364] duration metric: took 4m16.324186239s to acquireMachinesLock for "old-k8s-version-576210"
	I0804 00:14:28.318635   64758 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:28.318646   64758 fix.go:54] fixHost starting: 
	I0804 00:14:28.319092   64758 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:28.319128   64758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:28.334850   64758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I0804 00:14:28.335321   64758 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:28.335817   64758 main.go:141] libmachine: Using API Version  1
	I0804 00:14:28.335848   64758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:28.336204   64758 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:28.336435   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:28.336622   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetState
	I0804 00:14:28.338146   64758 fix.go:112] recreateIfNeeded on old-k8s-version-576210: state=Stopped err=<nil>
	I0804 00:14:28.338166   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	W0804 00:14:28.338322   64758 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:28.340640   64758 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-576210" ...
	I0804 00:14:28.315605   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:14:28.315642   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316035   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:14:28.316073   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:14:28.316325   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:14:28.318440   64502 machine.go:97] duration metric: took 4m37.42620041s to provisionDockerMachine
	I0804 00:14:28.318477   64502 fix.go:56] duration metric: took 4m37.448052873s for fixHost
	I0804 00:14:28.318485   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 4m37.44807127s
	W0804 00:14:28.318509   64502 start.go:714] error starting host: provision: host is not running
	W0804 00:14:28.318594   64502 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0804 00:14:28.318606   64502 start.go:729] Will try again in 5 seconds ...
	I0804 00:14:28.342217   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .Start
	I0804 00:14:28.342401   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring networks are active...
	I0804 00:14:28.343274   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network default is active
	I0804 00:14:28.343761   64758 main.go:141] libmachine: (old-k8s-version-576210) Ensuring network mk-old-k8s-version-576210 is active
	I0804 00:14:28.344268   64758 main.go:141] libmachine: (old-k8s-version-576210) Getting domain xml...
	I0804 00:14:28.345080   64758 main.go:141] libmachine: (old-k8s-version-576210) Creating domain...
	I0804 00:14:29.575420   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting to get IP...
	I0804 00:14:29.576307   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.576754   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.576842   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.576711   66003 retry.go:31] will retry after 272.821874ms: waiting for machine to come up
	I0804 00:14:29.851363   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:29.851951   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:29.851976   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:29.851895   66003 retry.go:31] will retry after 247.116514ms: waiting for machine to come up
	I0804 00:14:30.100479   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.100883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.100916   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.100833   66003 retry.go:31] will retry after 353.251065ms: waiting for machine to come up
	I0804 00:14:30.455526   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:30.455975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:30.456004   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:30.455933   66003 retry.go:31] will retry after 558.071575ms: waiting for machine to come up
	I0804 00:14:31.015539   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.015974   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.016000   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.015917   66003 retry.go:31] will retry after 514.757536ms: waiting for machine to come up
	I0804 00:14:31.532799   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:31.533232   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:31.533250   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:31.533186   66003 retry.go:31] will retry after 607.548546ms: waiting for machine to come up
	I0804 00:14:33.318807   64502 start.go:360] acquireMachinesLock for embed-certs-877598: {Name:mkd8753f972f5e4e51f77502e5c8c1796bb2d0ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:14:32.142162   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:32.142658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:32.142693   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:32.142610   66003 retry.go:31] will retry after 897.977595ms: waiting for machine to come up
	I0804 00:14:33.042628   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:33.043002   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:33.043028   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:33.042966   66003 retry.go:31] will retry after 1.094117762s: waiting for machine to come up
	I0804 00:14:34.138946   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:34.139459   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:34.139485   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:34.139414   66003 retry.go:31] will retry after 1.435055372s: waiting for machine to come up
	I0804 00:14:35.576253   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:35.576603   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:35.576625   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:35.576547   66003 retry.go:31] will retry after 1.688006591s: waiting for machine to come up
	I0804 00:14:37.265928   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:37.266429   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:37.266456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:37.266371   66003 retry.go:31] will retry after 2.356818801s: waiting for machine to come up
	I0804 00:14:39.624408   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:39.624832   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:39.624863   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:39.624775   66003 retry.go:31] will retry after 2.41856098s: waiting for machine to come up
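
The "waiting for machine to come up" retries above grow from a few hundred milliseconds to several seconds as libmachine repeatedly looks for the domain's DHCP lease by MAC address. The sketch below illustrates that grow-and-jitter backoff loop under stated assumptions: lookupIP is a hypothetical stand-in for querying libvirt's leases, and the backoff limits are invented for the example.

// ip_wait_sketch.go: poll for a machine IP with growing, jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for asking libvirt which lease matches the domain's MAC.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(mac string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("machine %s never reported an IP", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:cc:b7:b1", 30*time.Second); err == nil {
		fmt.Println("Found IP for machine:", ip)
	} else {
		fmt.Println(err)
	}
}
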
	I0804 00:14:46.442402   65087 start.go:364] duration metric: took 3m44.405576801s to acquireMachinesLock for "no-preload-118016"
	I0804 00:14:46.442459   65087 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:14:46.442469   65087 fix.go:54] fixHost starting: 
	I0804 00:14:46.442938   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:14:46.442975   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:14:46.459944   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0804 00:14:46.460375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:14:46.460851   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:14:46.460871   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:14:46.461211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:14:46.461402   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:14:46.461538   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:14:46.463097   65087 fix.go:112] recreateIfNeeded on no-preload-118016: state=Stopped err=<nil>
	I0804 00:14:46.463126   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	W0804 00:14:46.463282   65087 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:14:46.465711   65087 out.go:177] * Restarting existing kvm2 VM for "no-preload-118016" ...
	I0804 00:14:42.044498   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:42.044855   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | unable to find current IP address of domain old-k8s-version-576210 in network mk-old-k8s-version-576210
	I0804 00:14:42.044882   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | I0804 00:14:42.044822   66003 retry.go:31] will retry after 3.111190148s: waiting for machine to come up
	I0804 00:14:45.158161   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.158688   64758 main.go:141] libmachine: (old-k8s-version-576210) Found IP for machine: 192.168.72.154
	I0804 00:14:45.158709   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserving static IP address...
	I0804 00:14:45.158719   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has current primary IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.159112   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.159138   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | skip adding static IP to network mk-old-k8s-version-576210 - found existing host DHCP lease matching {name: "old-k8s-version-576210", mac: "52:54:00:cc:b7:b1", ip: "192.168.72.154"}
	I0804 00:14:45.159151   64758 main.go:141] libmachine: (old-k8s-version-576210) Reserved static IP address: 192.168.72.154
	I0804 00:14:45.159163   64758 main.go:141] libmachine: (old-k8s-version-576210) Waiting for SSH to be available...
	I0804 00:14:45.159172   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Getting to WaitForSSH function...
	I0804 00:14:45.161469   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161782   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.161812   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.161936   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH client type: external
	I0804 00:14:45.161975   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa (-rw-------)
	I0804 00:14:45.162015   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:14:45.162034   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | About to run SSH command:
	I0804 00:14:45.162044   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | exit 0
	I0804 00:14:45.281546   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | SSH cmd err, output: <nil>: 
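
The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and the machine's generated key, running a no-op command until it exits cleanly. A rough sketch of that probe with os/exec follows; the flags are copied from the logged command line, while the helper name and structure are assumptions, not minikube's actual code.

// ssh_probe_sketch.go: run "exit 0" over ssh to confirm the guest accepts logins.
package main

import (
	"fmt"
	"os/exec"
)

func sshExit0(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0", // the same no-op command the log runs
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	err := sshExit0("192.168.72.154", "/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa")
	fmt.Println("SSH reachable:", err == nil)
}
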
	I0804 00:14:45.281859   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetConfigRaw
	I0804 00:14:45.282574   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.284998   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285386   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.285414   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.285614   64758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/config.json ...
	I0804 00:14:45.285806   64758 machine.go:94] provisionDockerMachine start ...
	I0804 00:14:45.285823   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:45.286098   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.288285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288640   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.288668   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.288753   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.288931   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289088   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.289253   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.289426   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.289628   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.289640   64758 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:14:45.386001   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:14:45.386036   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386325   64758 buildroot.go:166] provisioning hostname "old-k8s-version-576210"
	I0804 00:14:45.386348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.386536   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.389316   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389718   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.389739   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.389948   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.390122   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390285   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.390415   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.390557   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.390758   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.390776   64758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-576210 && echo "old-k8s-version-576210" | sudo tee /etc/hostname
	I0804 00:14:45.499644   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-576210
	
	I0804 00:14:45.499695   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.502583   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.502935   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.502959   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.503123   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.503318   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503456   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.503570   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.503729   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.503898   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.503915   64758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-576210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-576210/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-576210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:14:45.606971   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
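
The hostname provisioning step above pushes a small shell script that pins the machine's name in /etc/hosts, rewriting the 127.0.1.1 entry if one exists and appending it otherwise. A sketch of composing that script for an arbitrary hostname is shown below; the helper name is hypothetical and the template is simply lifted from the logged command, so it is an illustration rather than minikube's exact provisioner code.

// etc_hosts_sketch.go: build the /etc/hosts patch script for a given hostname.
package main

import "fmt"

func etcHostsCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	// Prints the same script the log shows for old-k8s-version-576210.
	fmt.Println(etcHostsCommand("old-k8s-version-576210"))
}
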
	I0804 00:14:45.607003   64758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:14:45.607045   64758 buildroot.go:174] setting up certificates
	I0804 00:14:45.607053   64758 provision.go:84] configureAuth start
	I0804 00:14:45.607062   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetMachineName
	I0804 00:14:45.607327   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:45.610009   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610378   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.610407   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.610545   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.612549   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.612876   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.612908   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.613071   64758 provision.go:143] copyHostCerts
	I0804 00:14:45.613134   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:14:45.613147   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:14:45.613231   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:14:45.613343   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:14:45.613368   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:14:45.613410   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:14:45.613491   64758 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:14:45.613501   64758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:14:45.613535   64758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:14:45.613609   64758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-576210 san=[127.0.0.1 192.168.72.154 localhost minikube old-k8s-version-576210]
	I0804 00:14:45.794221   64758 provision.go:177] copyRemoteCerts
	I0804 00:14:45.794276   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:14:45.794299   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.796859   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797182   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.797225   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.797348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.797555   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.797687   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.797804   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:45.875704   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:14:45.903765   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0804 00:14:45.930101   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:14:45.955639   64758 provision.go:87] duration metric: took 348.556108ms to configureAuth
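
configureAuth above regenerates a server certificate whose SANs cover the machine's addresses and names (127.0.0.1, 192.168.72.154, localhost, minikube, old-k8s-version-576210) before copying it to /etc/docker on the guest. The following is a self-contained sketch of issuing such a SAN certificate with Go's crypto/x509; it self-signs for brevity (minikube signs with its CA), and the expiry simply reuses the 26280h CertExpiration value from the cluster config above.

// san_cert_sketch.go: issue a server cert with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-576210"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-576210"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.154")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
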
	I0804 00:14:45.955668   64758 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:14:45.955874   64758 config.go:182] Loaded profile config "old-k8s-version-576210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:14:45.955960   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:45.958487   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958835   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:45.958950   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:45.958970   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:45.959193   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959348   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:45.959472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:45.959616   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:45.959789   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:45.959810   64758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:14:46.217683   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:14:46.217725   64758 machine.go:97] duration metric: took 931.901933ms to provisionDockerMachine
	I0804 00:14:46.217742   64758 start.go:293] postStartSetup for "old-k8s-version-576210" (driver="kvm2")
	I0804 00:14:46.217758   64758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:14:46.217787   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.218127   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:14:46.218151   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.220834   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221148   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.221170   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.221342   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.221576   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.221733   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.221867   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.300102   64758 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:14:46.304434   64758 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:14:46.304464   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:14:46.304538   64758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:14:46.304631   64758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:14:46.304747   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:14:46.314378   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:46.339057   64758 start.go:296] duration metric: took 121.299069ms for postStartSetup
	I0804 00:14:46.339105   64758 fix.go:56] duration metric: took 18.020458894s for fixHost
	I0804 00:14:46.339129   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.341883   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342258   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.342285   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.342472   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.342688   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342856   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.342992   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.343161   64758 main.go:141] libmachine: Using SSH client type: native
	I0804 00:14:46.343385   64758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.154 22 <nil> <nil>}
	I0804 00:14:46.343400   64758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:14:46.442247   64758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730486.414818212
	
	I0804 00:14:46.442275   64758 fix.go:216] guest clock: 1722730486.414818212
	I0804 00:14:46.442288   64758 fix.go:229] Guest: 2024-08-04 00:14:46.414818212 +0000 UTC Remote: 2024-08-04 00:14:46.339109981 +0000 UTC m=+274.490542023 (delta=75.708231ms)
	I0804 00:14:46.442313   64758 fix.go:200] guest clock delta is within tolerance: 75.708231ms
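The clock-skew check above compares the guest's `date +%s.%N` reading against the host time and accepts the existing host if the delta stays inside a fixed tolerance. A minimal Go sketch of that comparison (illustrative only, not minikube's fix.go; the one-second tolerance here is an assumption for the example):

// Illustrative sketch: compare a guest clock reading against the local clock
// and decide whether the drift is within a fixed tolerance, as the
// "guest clock delta is within tolerance" line above reports.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	local := time.Now()
	guest := local.Add(75 * time.Millisecond) // roughly the ~75ms delta seen in the log
	delta, ok := withinTolerance(guest, local, time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}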
	I0804 00:14:46.442319   64758 start.go:83] releasing machines lock for "old-k8s-version-576210", held for 18.123699316s
	I0804 00:14:46.442347   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.442656   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:46.445456   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.445865   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.445892   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.446069   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446577   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446743   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .DriverName
	I0804 00:14:46.446816   64758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:14:46.446850   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.446965   64758 ssh_runner.go:195] Run: cat /version.json
	I0804 00:14:46.446987   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHHostname
	I0804 00:14:46.449576   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449794   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.449953   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.449983   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450178   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450265   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:46.450317   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:46.450384   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450520   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHPort
	I0804 00:14:46.450605   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450667   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHKeyPath
	I0804 00:14:46.450733   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.450780   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetSSHUsername
	I0804 00:14:46.450910   64758 sshutil.go:53] new ssh client: &{IP:192.168.72.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/old-k8s-version-576210/id_rsa Username:docker}
	I0804 00:14:46.534686   64758 ssh_runner.go:195] Run: systemctl --version
	I0804 00:14:46.554270   64758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:14:46.708220   64758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:14:46.714541   64758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:14:46.714607   64758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:14:46.731642   64758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:14:46.731668   64758 start.go:495] detecting cgroup driver to use...
	I0804 00:14:46.731739   64758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:14:46.748782   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:14:46.763556   64758 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:14:46.763640   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:14:46.778075   64758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:14:46.793133   64758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:14:46.466927   65087 main.go:141] libmachine: (no-preload-118016) Calling .Start
	I0804 00:14:46.467081   65087 main.go:141] libmachine: (no-preload-118016) Ensuring networks are active...
	I0804 00:14:46.467696   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network default is active
	I0804 00:14:46.468023   65087 main.go:141] libmachine: (no-preload-118016) Ensuring network mk-no-preload-118016 is active
	I0804 00:14:46.468344   65087 main.go:141] libmachine: (no-preload-118016) Getting domain xml...
	I0804 00:14:46.468932   65087 main.go:141] libmachine: (no-preload-118016) Creating domain...
	I0804 00:14:46.918377   64758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:14:47.059683   64758 docker.go:233] disabling docker service ...
	I0804 00:14:47.059753   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:14:47.074819   64758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:14:47.092184   64758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:14:47.235274   64758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:14:47.357937   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:14:47.375273   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:14:47.395182   64758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:14:47.395236   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.407036   64758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:14:47.407092   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.418562   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.434481   64758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:14:47.447488   64758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:14:47.460242   64758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:14:47.471089   64758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:14:47.471143   64758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:14:47.486698   64758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
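The runtime preparation above amounts to a handful of idempotent shell edits (pause image, cgroupfs cgroup manager, conmon_cgroup) plus the bridge-netfilter and IPv4-forwarding prerequisites, all executed over SSH inside the VM. A minimal Go sketch that replays those same commands through a local shell; the exec wrapper is illustrative only, minikube drives them through its ssh_runner:

// Sketch of the CRI-O preparation steps logged above; commands are copied
// from the log, the local /bin/sh wrapper is a stand-in for SSH execution.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error {
	out, err := exec.Command("/bin/sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
	fmt.Println("cri-o prepared; restart crio via systemd to apply")
}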
	I0804 00:14:47.498754   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:47.630867   64758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:14:47.796598   64758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:14:47.796690   64758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:14:47.802302   64758 start.go:563] Will wait 60s for crictl version
	I0804 00:14:47.802364   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:47.806368   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:14:47.847588   64758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:14:47.847679   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.877936   64758 ssh_runner.go:195] Run: crio --version
	I0804 00:14:47.908229   64758 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:14:47.909635   64758 main.go:141] libmachine: (old-k8s-version-576210) Calling .GetIP
	I0804 00:14:47.912658   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913102   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b7:b1", ip: ""} in network mk-old-k8s-version-576210: {Iface:virbr4 ExpiryTime:2024-08-04 01:14:39 +0000 UTC Type:0 Mac:52:54:00:cc:b7:b1 Iaid: IPaddr:192.168.72.154 Prefix:24 Hostname:old-k8s-version-576210 Clientid:01:52:54:00:cc:b7:b1}
	I0804 00:14:47.913130   64758 main.go:141] libmachine: (old-k8s-version-576210) DBG | domain old-k8s-version-576210 has defined IP address 192.168.72.154 and MAC address 52:54:00:cc:b7:b1 in network mk-old-k8s-version-576210
	I0804 00:14:47.913438   64758 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:14:47.917910   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:47.931201   64758 kubeadm.go:883] updating cluster {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:14:47.931318   64758 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:14:47.931381   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:47.980001   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:47.980071   64758 ssh_runner.go:195] Run: which lz4
	I0804 00:14:47.984277   64758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:14:47.988781   64758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:14:47.988810   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:14:49.706968   64758 crio.go:462] duration metric: took 1.722721175s to copy over tarball
	I0804 00:14:49.707059   64758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
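Whether the preload path is taken hinges on the `crictl images --output json` probe above: if the expected kube-apiserver tag is missing, the tarball is copied over and extracted, otherwise the step is skipped. A small Go sketch of that check, assuming crictl's JSON output has the `{"images":[{"repoTags":[...]}]}` shape:

// Sketch: parse a crictl image listing and report whether a given tag is
// present, which is the decision behind "assuming images are not preloaded".
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println(ok, err) // false <nil> -> fall back to the preload tarball / cached images
}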
	I0804 00:14:47.715321   65087 main.go:141] libmachine: (no-preload-118016) Waiting to get IP...
	I0804 00:14:47.716397   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.716853   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.716889   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.716820   66120 retry.go:31] will retry after 187.841432ms: waiting for machine to come up
	I0804 00:14:47.906481   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:47.906984   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:47.907018   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:47.906942   66120 retry.go:31] will retry after 389.569097ms: waiting for machine to come up
	I0804 00:14:48.298691   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.299997   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.300021   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.299947   66120 retry.go:31] will retry after 382.905254ms: waiting for machine to come up
	I0804 00:14:48.684628   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:48.685095   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:48.685127   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:48.685066   66120 retry.go:31] will retry after 526.267085ms: waiting for machine to come up
	I0804 00:14:49.213459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.214180   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.214203   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.214142   66120 retry.go:31] will retry after 666.253139ms: waiting for machine to come up
	I0804 00:14:49.882141   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:49.882610   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:49.882639   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:49.882560   66120 retry.go:31] will retry after 776.560525ms: waiting for machine to come up
	I0804 00:14:50.660679   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:50.661149   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:50.661177   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:50.661105   66120 retry.go:31] will retry after 825.927722ms: waiting for machine to come up
	I0804 00:14:51.488562   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:51.488937   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:51.488964   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:51.488894   66120 retry.go:31] will retry after 1.210535859s: waiting for machine to come up
	I0804 00:14:52.511242   64758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.804147671s)
	I0804 00:14:52.511275   64758 crio.go:469] duration metric: took 2.804279705s to extract the tarball
	I0804 00:14:52.511285   64758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:14:52.553905   64758 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:14:52.587405   64758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:14:52.587429   64758 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:14:52.587496   64758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.587513   64758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.587550   64758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.587551   64758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.587554   64758 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.587567   64758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.587570   64758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.587577   64758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.589240   64758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:52.589239   64758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.589247   64758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.589211   64758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:52.589206   64758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.589287   64758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:14:52.589579   64758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.742969   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.766505   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.782813   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:14:52.788509   64758 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:14:52.788553   64758 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.788598   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.823108   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.829531   64758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:14:52.829577   64758 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.829648   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.858209   64758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:14:52.858238   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:14:52.858245   64758 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:14:52.858288   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.888665   64758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:14:52.888717   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:14:52.888748   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:14:52.888717   64758 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.888794   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:52.918127   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:52.921386   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:14:52.929839   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:52.977866   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:14:52.977919   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:14:52.977960   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:14:52.994379   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.003198   64758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:14:53.003233   64758 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.003273   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.056310   64758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:14:53.056338   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:14:53.056357   64758 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.056403   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.062077   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:14:53.062119   64758 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:14:53.062161   64758 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.062206   64758 ssh_runner.go:195] Run: which crictl
	I0804 00:14:53.064260   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:14:53.114709   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:14:53.114758   64758 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:14:53.118375   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:14:53.147635   64758 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:14:53.497155   64758 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:14:53.647242   64758 cache_images.go:92] duration metric: took 1.059794593s to LoadCachedImages
	W0804 00:14:53.647353   64758 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0804 00:14:53.647370   64758 kubeadm.go:934] updating node { 192.168.72.154 8443 v1.20.0 crio true true} ...
	I0804 00:14:53.647507   64758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-576210 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:14:53.647586   64758 ssh_runner.go:195] Run: crio config
	I0804 00:14:53.710377   64758 cni.go:84] Creating CNI manager for ""
	I0804 00:14:53.710399   64758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:14:53.710411   64758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:14:53.710437   64758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.154 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-576210 NodeName:old-k8s-version-576210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:14:53.710583   64758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-576210"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:14:53.710661   64758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:14:53.721942   64758 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:14:53.722005   64758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:14:53.732623   64758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0804 00:14:53.749878   64758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:14:53.767147   64758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0804 00:14:53.785522   64758 ssh_runner.go:195] Run: grep 192.168.72.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:14:53.789438   64758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:14:53.802152   64758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:14:53.934508   64758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:14:53.952247   64758 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210 for IP: 192.168.72.154
	I0804 00:14:53.952280   64758 certs.go:194] generating shared ca certs ...
	I0804 00:14:53.952301   64758 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:53.952470   64758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:14:53.952523   64758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:14:53.952536   64758 certs.go:256] generating profile certs ...
	I0804 00:14:53.952658   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.key
	I0804 00:14:53.952730   64758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key.5357f842
	I0804 00:14:53.952783   64758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key
	I0804 00:14:53.952948   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:14:53.953000   64758 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:14:53.953013   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:14:53.953048   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:14:53.953084   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:14:53.953114   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:14:53.953191   64758 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:14:53.954013   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:14:54.001446   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:14:54.029628   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:14:54.062713   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:14:54.090711   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0804 00:14:54.117970   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:14:54.163691   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:14:54.190151   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:14:54.219334   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:14:54.244677   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:14:54.269795   64758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:14:54.294949   64758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:14:54.312330   64758 ssh_runner.go:195] Run: openssl version
	I0804 00:14:54.318320   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:14:54.328932   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333686   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.333737   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:14:54.341330   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:14:54.356008   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:14:54.368966   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373896   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.373954   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:14:54.379770   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:14:54.390903   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:14:54.402637   64758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407296   64758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.407362   64758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:14:54.413215   64758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:14:54.424473   64758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:14:54.429673   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:14:54.436038   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:14:54.442091   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:14:54.448507   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:14:54.455421   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:14:54.461969   64758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
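Each `openssl x509 -noout -checkend 86400` probe above asks whether a certificate will expire within the next 24 hours. An illustrative Go equivalent using crypto/x509 (not minikube's code; the path is taken from the log):

// Sketch: load a PEM certificate and report whether it expires within the
// given window, mirroring openssl's -checkend semantics.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}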
	I0804 00:14:54.468042   64758 kubeadm.go:392] StartCluster: {Name:old-k8s-version-576210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-576210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.154 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:14:54.468151   64758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:14:54.468208   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.508109   64758 cri.go:89] found id: ""
	I0804 00:14:54.508183   64758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:14:54.518712   64758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:14:54.518736   64758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:14:54.518788   64758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:14:54.528545   64758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:14:54.529780   64758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-576210" does not appear in /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:14:54.530411   64758 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-9607/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-576210" cluster setting kubeconfig missing "old-k8s-version-576210" context setting]
	I0804 00:14:54.531316   64758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:14:54.550431   64758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:14:54.561047   64758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.154
	I0804 00:14:54.561086   64758 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:14:54.561108   64758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:14:54.561163   64758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:14:54.597213   64758 cri.go:89] found id: ""
	I0804 00:14:54.597282   64758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:14:54.612914   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:14:54.622533   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:14:54.622562   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:14:54.622613   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:14:54.632746   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:14:54.632812   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:14:54.642197   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:14:54.651204   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:14:54.651268   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:14:54.660496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.669448   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:14:54.669512   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:14:54.678773   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:14:54.687854   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:14:54.687902   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:14:54.697066   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:14:54.707036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:54.840553   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.551919   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.790500   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.898210   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:14:55.995621   64758 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:14:55.995711   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:56.496072   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:52.701200   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:52.701574   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:52.701598   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:52.701547   66120 retry.go:31] will retry after 1.518623613s: waiting for machine to come up
	I0804 00:14:54.221367   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:54.221886   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:54.221916   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:54.221835   66120 retry.go:31] will retry after 1.869121058s: waiting for machine to come up
	I0804 00:14:56.092101   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:56.092527   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:56.092550   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:56.092488   66120 retry.go:31] will retry after 2.071227436s: waiting for machine to come up
	I0804 00:14:56.995965   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.496285   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:57.995805   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.496549   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.996224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.496360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:59.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:00.996056   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:01.496435   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:14:58.166383   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:14:58.166760   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:14:58.166807   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:14:58.166729   66120 retry.go:31] will retry after 2.352991709s: waiting for machine to come up
	I0804 00:15:00.522153   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:00.522630   65087 main.go:141] libmachine: (no-preload-118016) DBG | unable to find current IP address of domain no-preload-118016 in network mk-no-preload-118016
	I0804 00:15:00.522657   65087 main.go:141] libmachine: (no-preload-118016) DBG | I0804 00:15:00.522584   66120 retry.go:31] will retry after 3.326179831s: waiting for machine to come up
	I0804 00:15:05.170439   65441 start.go:364] duration metric: took 3m12.703297591s to acquireMachinesLock for "default-k8s-diff-port-969068"
	I0804 00:15:05.170512   65441 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:05.170520   65441 fix.go:54] fixHost starting: 
	I0804 00:15:05.170935   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:05.170974   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:05.188546   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0804 00:15:05.188997   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:05.189494   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:05.189518   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:05.189933   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:05.190132   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:05.190276   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:05.191653   65441 fix.go:112] recreateIfNeeded on default-k8s-diff-port-969068: state=Stopped err=<nil>
	I0804 00:15:05.191684   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	W0804 00:15:05.191834   65441 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:05.194275   65441 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-969068" ...
	I0804 00:15:01.996148   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.496756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:02.996430   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.496646   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.996707   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.496772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:04.995997   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.496651   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:05.996384   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.496403   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:03.850063   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850518   65087 main.go:141] libmachine: (no-preload-118016) Found IP for machine: 192.168.61.137
	I0804 00:15:03.850544   65087 main.go:141] libmachine: (no-preload-118016) Reserving static IP address...
	I0804 00:15:03.850559   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has current primary IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.850970   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.851001   65087 main.go:141] libmachine: (no-preload-118016) DBG | skip adding static IP to network mk-no-preload-118016 - found existing host DHCP lease matching {name: "no-preload-118016", mac: "52:54:00:be:41:20", ip: "192.168.61.137"}
	I0804 00:15:03.851015   65087 main.go:141] libmachine: (no-preload-118016) Reserved static IP address: 192.168.61.137
	I0804 00:15:03.851030   65087 main.go:141] libmachine: (no-preload-118016) Waiting for SSH to be available...
	I0804 00:15:03.851048   65087 main.go:141] libmachine: (no-preload-118016) DBG | Getting to WaitForSSH function...
	I0804 00:15:03.853316   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853676   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.853705   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.853819   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH client type: external
	I0804 00:15:03.853850   65087 main.go:141] libmachine: (no-preload-118016) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa (-rw-------)
	I0804 00:15:03.853886   65087 main.go:141] libmachine: (no-preload-118016) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:03.853901   65087 main.go:141] libmachine: (no-preload-118016) DBG | About to run SSH command:
	I0804 00:15:03.853913   65087 main.go:141] libmachine: (no-preload-118016) DBG | exit 0
	I0804 00:15:03.981414   65087 main.go:141] libmachine: (no-preload-118016) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:03.981807   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetConfigRaw
	I0804 00:15:03.982419   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:03.985062   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985400   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.985433   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.985674   65087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/config.json ...
	I0804 00:15:03.985857   65087 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:03.985873   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:03.986090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:03.988490   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.988798   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:03.988826   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:03.989017   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:03.989183   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989342   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:03.989510   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:03.989697   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:03.989916   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:03.989927   65087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:04.106042   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:04.106090   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106372   65087 buildroot.go:166] provisioning hostname "no-preload-118016"
	I0804 00:15:04.106398   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.106594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.109434   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.109803   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.109919   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.110092   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110248   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.110423   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.110582   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.110749   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.110764   65087 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-118016 && echo "no-preload-118016" | sudo tee /etc/hostname
	I0804 00:15:04.239856   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-118016
	
	I0804 00:15:04.239884   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.242877   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243241   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.243271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.243486   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.243712   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.243897   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.244046   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.244232   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.244420   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.244443   65087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-118016' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-118016/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-118016' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:04.367259   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:04.367289   65087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:04.367330   65087 buildroot.go:174] setting up certificates
	I0804 00:15:04.367340   65087 provision.go:84] configureAuth start
	I0804 00:15:04.367432   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetMachineName
	I0804 00:15:04.367848   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:04.370330   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370630   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.370658   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.370744   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.372799   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373175   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.373203   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.373308   65087 provision.go:143] copyHostCerts
	I0804 00:15:04.373386   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:04.373399   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:04.373458   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:04.373557   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:04.373565   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:04.373585   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:04.373651   65087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:04.373657   65087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:04.373675   65087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:04.373732   65087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.no-preload-118016 san=[127.0.0.1 192.168.61.137 localhost minikube no-preload-118016]
	I0804 00:15:04.467261   65087 provision.go:177] copyRemoteCerts
	I0804 00:15:04.467322   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:04.467347   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.469843   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470126   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.470154   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.470297   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.470478   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.470644   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.470761   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:04.559980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:04.585701   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:04.610270   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:04.633954   65087 provision.go:87] duration metric: took 266.53536ms to configureAuth
	I0804 00:15:04.633981   65087 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:04.634154   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:15:04.634219   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.636880   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637243   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.637271   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.637452   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.637664   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637823   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.637921   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.638060   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:04.638234   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:04.638250   65087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:04.916045   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:04.916077   65087 machine.go:97] duration metric: took 930.20802ms to provisionDockerMachine
	I0804 00:15:04.916088   65087 start.go:293] postStartSetup for "no-preload-118016" (driver="kvm2")
	I0804 00:15:04.916100   65087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:04.916113   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:04.916429   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:04.916453   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:04.919155   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919485   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:04.919514   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:04.919657   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:04.919859   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:04.920026   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:04.920166   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.012754   65087 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:05.017004   65087 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:05.017024   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:05.017091   65087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:05.017180   65087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:05.017293   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:05.026980   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:05.051265   65087 start.go:296] duration metric: took 135.164451ms for postStartSetup
	I0804 00:15:05.051309   65087 fix.go:56] duration metric: took 18.608839754s for fixHost
	I0804 00:15:05.051331   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.054286   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054683   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.054710   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.054876   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.055127   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055321   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.055485   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.055668   65087 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:05.055870   65087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.137 22 <nil> <nil>}
	I0804 00:15:05.055882   65087 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:05.170285   65087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730505.141206116
	
	I0804 00:15:05.170314   65087 fix.go:216] guest clock: 1722730505.141206116
	I0804 00:15:05.170321   65087 fix.go:229] Guest: 2024-08-04 00:15:05.141206116 +0000 UTC Remote: 2024-08-04 00:15:05.051313292 +0000 UTC m=+243.154971169 (delta=89.892824ms)
	I0804 00:15:05.170341   65087 fix.go:200] guest clock delta is within tolerance: 89.892824ms
	I0804 00:15:05.170359   65087 start.go:83] releasing machines lock for "no-preload-118016", held for 18.727925423s
	I0804 00:15:05.170392   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.170673   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:05.173694   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174084   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.174117   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.174265   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.174828   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175015   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:15:05.175103   65087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:05.175145   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.175263   65087 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:05.175286   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:15:05.177906   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178280   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178307   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178329   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178470   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.178688   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.178777   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:05.178832   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:05.178854   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.178945   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:15:05.179025   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.179111   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:15:05.179265   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:15:05.179417   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:15:05.282397   65087 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:05.288682   65087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:05.434388   65087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:05.440857   65087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:05.440937   65087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:05.461853   65087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:05.461879   65087 start.go:495] detecting cgroup driver to use...
	I0804 00:15:05.461944   65087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:05.478397   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:05.494093   65087 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:05.494151   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:05.509391   65087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:05.524127   65087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:05.640185   65087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:05.784994   65087 docker.go:233] disabling docker service ...
	I0804 00:15:05.785071   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:05.802802   65087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:05.818424   65087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:05.970147   65087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:06.099759   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:06.114434   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:06.132989   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:06.433914   65087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0804 00:15:06.433969   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.452155   65087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:06.452245   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.464730   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.475848   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.488341   65087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:06.501984   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.514776   65087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.534773   65087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:06.547076   65087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:06.558639   65087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:06.558695   65087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:06.572920   65087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:06.583298   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:06.705307   65087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:06.845776   65087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:06.845840   65087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:06.851710   65087 start.go:563] Will wait 60s for crictl version
	I0804 00:15:06.851764   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:06.855899   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:06.904392   65087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:06.904493   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.932866   65087 ssh_runner.go:195] Run: crio --version
	I0804 00:15:06.963071   65087 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0804 00:15:05.195984   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Start
	I0804 00:15:05.196175   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring networks are active...
	I0804 00:15:05.196904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network default is active
	I0804 00:15:05.197256   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Ensuring network mk-default-k8s-diff-port-969068 is active
	I0804 00:15:05.197709   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Getting domain xml...
	I0804 00:15:05.198474   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Creating domain...
	I0804 00:15:06.489009   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting to get IP...
	I0804 00:15:06.490137   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490569   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.490641   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.490549   66290 retry.go:31] will retry after 298.701839ms: waiting for machine to come up
	I0804 00:15:06.791467   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791938   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:06.791960   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:06.791894   66290 retry.go:31] will retry after 373.395742ms: waiting for machine to come up
	I0804 00:15:07.166622   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.167139   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.167048   66290 retry.go:31] will retry after 404.799649ms: waiting for machine to come up
	I0804 00:15:06.995779   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.495822   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:07.995970   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.495870   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:08.996379   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.495852   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:09.995819   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.495912   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:10.996591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:11.495964   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:06.964314   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetIP
	I0804 00:15:06.967088   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967517   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:15:06.967547   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:15:06.967787   65087 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:06.973133   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:06.990153   65087 kubeadm.go:883] updating cluster {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:06.990339   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.297536   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.591746   65087 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0804 00:15:07.874720   65087 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:15:07.874798   65087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:07.914104   65087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0804 00:15:07.914127   65087 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-rc.0 registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 registry.k8s.io/kube-scheduler:v1.31.0-rc.0 registry.k8s.io/kube-proxy:v1.31.0-rc.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:15:07.914172   65087 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.914212   65087 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:07.914237   65087 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0804 00:15:07.914253   65087 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.914324   65087 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.914225   65087 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.914374   65087 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:07.915833   65087 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:07.915838   65087 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:07.915816   65087 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:07.915814   65087 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0804 00:15:07.915882   65087 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:07.915962   65087 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:07.916150   65087 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.048225   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.050828   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.051873   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.056880   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.087643   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.091720   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0804 00:15:08.116485   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.173591   65087 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" does not exist at hash "fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c" in container runtime
	I0804 00:15:08.173642   65087 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.173686   65087 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0804 00:15:08.173704   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.173725   65087 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.173777   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.191254   65087 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" does not exist at hash "c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0" in container runtime
	I0804 00:15:08.191298   65087 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.191352   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.195238   65087 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" does not exist at hash "0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c" in container runtime
	I0804 00:15:08.195290   65087 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.195340   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.246005   65087 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-rc.0" does not exist at hash "41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318" in container runtime
	I0804 00:15:08.246048   65087 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.246100   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.336855   65087 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0804 00:15:08.336936   65087 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.336945   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0804 00:15:08.336965   65087 ssh_runner.go:195] Run: which crictl
	I0804 00:15:08.337078   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0804 00:15:08.337120   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0804 00:15:08.337161   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0804 00:15:08.337207   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0804 00:15:08.425270   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425297   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0804 00:15:08.425296   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.425455   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.425522   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:08.458378   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.458520   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:08.460719   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460827   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:08.460889   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0804 00:15:08.460983   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:08.492690   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0804 00:15:08.492789   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492808   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492839   65087 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:08.492852   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0
	I0804 00:15:08.492863   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492932   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0 (exists)
	I0804 00:15:08.492976   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0 (exists)
	I0804 00:15:08.493036   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0804 00:15:08.763401   65087 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063302   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-rc.0: (2.570424927s)
	I0804 00:15:11.063326   65087 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0: (2.570469177s)
	I0804 00:15:11.063341   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 from cache
	I0804 00:15:11.063348   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0804 00:15:11.063355   65087 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063377   65087 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.299939136s)
	I0804 00:15:11.063414   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0804 00:15:11.063438   65087 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0804 00:15:11.063468   65087 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:11.063516   65087 ssh_runner.go:195] Run: which crictl
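
The lines above show the no-preload restart path syncing cached images: for each image minikube checks whether the tarball already exists on the node, copies it only if missing, and then runs `sudo podman load` so the image lands in the store shared with CRI-O. A minimal sketch of that flow follows; it shells out to plain `ssh`/`scp` rather than minikube's real ssh_runner, and the user@host value is an illustrative assumption.

    // Hedged sketch of the "copy if missing, then podman load" step (not minikube's code).
    package main

    import (
        "fmt"
        "os/exec"
    )

    // runRemote executes a command on the node over ssh.
    func runRemote(host, cmd string) error {
        out, err := exec.Command("ssh", host, cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
        }
        return nil
    }

    // loadCachedImage transfers a cached tarball to the node (unless it is already
    // there) and loads it into the image store shared by podman and CRI-O.
    func loadCachedImage(host, localTar, remoteTar string) error {
        if err := runRemote(host, "stat "+remoteTar); err != nil {
            // Tarball not on the node yet: copy it first.
            if err := exec.Command("scp", localTar, host+":"+remoteTar).Run(); err != nil {
                return err
            }
        }
        return runRemote(host, "sudo podman load -i "+remoteTar)
    }

    func main() {
        // Paths modeled on the log above; the host string is assumed.
        err := loadCachedImage("docker@192.168.61.137",
            "/home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0",
            "/var/lib/minikube/images/etcd_3.5.15-0")
        fmt.Println(err)
    }
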
	I0804 00:15:07.573639   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574103   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:07.574150   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:07.574068   66290 retry.go:31] will retry after 552.033422ms: waiting for machine to come up
	I0804 00:15:08.127755   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128317   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.128345   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.128254   66290 retry.go:31] will retry after 601.661676ms: waiting for machine to come up
	I0804 00:15:08.731160   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731571   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:08.731596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:08.731526   66290 retry.go:31] will retry after 899.954536ms: waiting for machine to come up
	I0804 00:15:09.632769   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633217   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:09.633275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:09.633188   66290 retry.go:31] will retry after 1.096119877s: waiting for machine to come up
	I0804 00:15:10.731586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732092   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:10.732116   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:10.732062   66290 retry.go:31] will retry after 1.09033143s: waiting for machine to come up
	I0804 00:15:11.824287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824697   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:11.824723   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:11.824648   66290 retry.go:31] will retry after 1.458040473s: waiting for machine to come up
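
These retries come from the kvm2 driver polling libvirt for the VM's DHCP lease before it can SSH into default-k8s-diff-port-969068. A generic version of that wait loop might look like the sketch below; the getIP callback stands in for the real libvirt lookup, which the log does not show.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls getIP until it returns a non-empty address or the timeout
    // expires, sleeping a growing, slightly jittered interval between attempts,
    // much like the "will retry after ..." lines in the log.
    func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 500 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := getIP(); err == nil && ip != "" {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            backoff += backoff / 2
        }
        return "", errors.New("timed out waiting for the machine to report an IP")
    }

    func main() {
        // Stand-in lookup that never succeeds; a real driver would query libvirt.
        _, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 5*time.Second)
        fmt.Println(err)
    }
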
	I0804 00:15:11.996494   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.496005   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:12.996429   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.496310   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:13.996525   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.495995   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:14.996172   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.495809   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:15.996016   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:16.496210   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
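
The repeated pgrep runs above are another profile (process 64758) waiting for its restarted kube-apiserver to appear, polling roughly twice a second. A small stand-alone equivalent, using the same pgrep pattern as the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls `sudo pgrep -xnf pattern` until it matches or the
    // timeout expires; pgrep exits 0 only when a process matched.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q appeared within %v", pattern, timeout)
    }

    func main() {
        fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 4*time.Minute))
    }
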
	I0804 00:15:14.840723   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.777281435s)
	I0804 00:15:14.840759   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0804 00:15:14.840758   65087 ssh_runner.go:235] Completed: which crictl: (3.777229082s)
	I0804 00:15:14.840769   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0
	I0804 00:15:14.840815   65087 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:14.894482   65087 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0804 00:15:14.894607   65087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:16.729218   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-rc.0: (1.888374505s)
	I0804 00:15:16.729270   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 from cache
	I0804 00:15:16.729277   65087 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.834630766s)
	I0804 00:15:16.729304   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:16.729312   65087 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0804 00:15:16.729368   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0
	I0804 00:15:13.284961   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:13.285435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:13.285332   66290 retry.go:31] will retry after 2.307816709s: waiting for machine to come up
	I0804 00:15:15.594435   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594855   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:15.594885   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:15.594804   66290 retry.go:31] will retry after 2.83542957s: waiting for machine to come up
	I0804 00:15:16.996765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.496069   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:17.995828   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.495847   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:18.996276   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.496155   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.996708   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:20.996145   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:21.496193   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:19.031187   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-rc.0: (2.301792704s)
	I0804 00:15:19.031309   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 from cache
	I0804 00:15:19.031343   65087 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:19.031389   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0
	I0804 00:15:20.493093   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-rc.0: (1.461677557s)
	I0804 00:15:20.493134   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 from cache
	I0804 00:15:20.493152   65087 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:20.493202   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0804 00:15:18.433690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434156   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:18.434188   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:18.434105   66290 retry.go:31] will retry after 2.563856777s: waiting for machine to come up
	I0804 00:15:20.999804   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000275   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | unable to find current IP address of domain default-k8s-diff-port-969068 in network mk-default-k8s-diff-port-969068
	I0804 00:15:21.000307   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | I0804 00:15:21.000236   66290 retry.go:31] will retry after 3.783170851s: waiting for machine to come up
	I0804 00:15:26.095635   64502 start.go:364] duration metric: took 52.776761645s to acquireMachinesLock for "embed-certs-877598"
	I0804 00:15:26.095695   64502 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:15:26.095703   64502 fix.go:54] fixHost starting: 
	I0804 00:15:26.096104   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:26.096143   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:26.113770   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0804 00:15:26.114303   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:26.114742   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:15:26.114768   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:26.115137   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:26.115330   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:26.115508   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:15:26.117156   64502 fix.go:112] recreateIfNeeded on embed-certs-877598: state=Stopped err=<nil>
	I0804 00:15:26.117179   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	W0804 00:15:26.117343   64502 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:15:26.119743   64502 out.go:177] * Restarting existing kvm2 VM for "embed-certs-877598" ...
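
The embed-certs-877598 block above illustrates the libmachine plugin model: the kvm2 driver runs as a separate process serving RPC on a localhost port, and the main binary dials it and invokes methods such as GetVersion, GetState and DriverName. The snippet below is only a rough illustration of that client side; the service and method names are invented for the example and are not libmachine's actual RPC interface.

    package main

    import (
        "fmt"
        "net/rpc"
    )

    func main() {
        // Port taken from the log line "Plugin server listening at address 127.0.0.1:36437".
        client, err := rpc.Dial("tcp", "127.0.0.1:36437")
        if err != nil {
            fmt.Println("dial plugin server:", err)
            return
        }
        defer client.Close()

        // "Driver.GetState" is a hypothetical method name used only for illustration.
        var state string
        if err := client.Call("Driver.GetState", 0, &state); err != nil {
            fmt.Println("GetState:", err)
            return
        }
        fmt.Println("machine state:", state)
    }
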
	I0804 00:15:21.996520   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.495922   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.995766   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.495923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:23.995770   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.496788   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:24.996759   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:25.996017   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.496445   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:22.363529   65087 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870304087s)
	I0804 00:15:22.363559   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0804 00:15:22.363573   65087 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:22.363618   65087 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0804 00:15:23.009879   65087 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19364-9607/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0804 00:15:23.009924   65087 cache_images.go:123] Successfully loaded all cached images
	I0804 00:15:23.009932   65087 cache_images.go:92] duration metric: took 15.095790334s to LoadCachedImages
	I0804 00:15:23.009946   65087 kubeadm.go:934] updating node { 192.168.61.137 8443 v1.31.0-rc.0 crio true true} ...
	I0804 00:15:23.010145   65087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-118016 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:23.010230   65087 ssh_runner.go:195] Run: crio config
	I0804 00:15:23.057968   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:23.057991   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:23.058002   65087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:23.058022   65087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.137 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-118016 NodeName:no-preload-118016 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:23.058149   65087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-118016"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:23.058210   65087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0804 00:15:23.068635   65087 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:23.068713   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:23.077867   65087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0804 00:15:23.094220   65087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0804 00:15:23.110798   65087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0804 00:15:23.132230   65087 ssh_runner.go:195] Run: grep 192.168.61.137	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:23.136622   65087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:23.149229   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:23.284623   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:23.309115   65087 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016 for IP: 192.168.61.137
	I0804 00:15:23.309212   65087 certs.go:194] generating shared ca certs ...
	I0804 00:15:23.309242   65087 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:23.309451   65087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:23.309509   65087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:23.309525   65087 certs.go:256] generating profile certs ...
	I0804 00:15:23.309633   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.key
	I0804 00:15:23.309718   65087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key.794a08a1
	I0804 00:15:23.309775   65087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key
	I0804 00:15:23.309951   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:23.309992   65087 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:23.310006   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:23.310050   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:23.310084   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:23.310125   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:23.310186   65087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:23.310811   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:23.346479   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:23.390508   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:23.419626   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:23.453891   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:15:23.481597   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:23.507749   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:23.537567   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:23.565469   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:23.590844   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:23.618748   65087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:23.645921   65087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:23.664034   65087 ssh_runner.go:195] Run: openssl version
	I0804 00:15:23.670083   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:23.681080   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685717   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.685777   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:23.691573   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:23.702260   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:23.713185   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717747   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.717803   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:23.723598   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:15:23.734445   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:23.745394   65087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750239   65087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.750312   65087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:23.756471   65087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:23.767795   65087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:23.772483   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:23.778613   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:23.784560   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:23.790455   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:23.796260   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:23.802405   65087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
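
Before deciding whether to reuse the existing control-plane certificates, minikube runs `openssl x509 -checkend 86400` against each of them, i.e. it asks whether the certificate is still valid for at least the next 24 hours. An equivalent check written directly against Go's crypto/x509 is sketched below; the file path is taken from the log, and the rest is illustrative rather than minikube's actual code.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside
    // the given window (86400s == 24h in the openssl calls above).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }
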
	I0804 00:15:23.808623   65087 kubeadm.go:392] StartCluster: {Name:no-preload-118016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-118016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:23.808710   65087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:23.808753   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.857908   65087 cri.go:89] found id: ""
	I0804 00:15:23.857983   65087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:23.868694   65087 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:23.868717   65087 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:23.868789   65087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:23.878826   65087 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:23.879879   65087 kubeconfig.go:125] found "no-preload-118016" server: "https://192.168.61.137:8443"
	I0804 00:15:23.882653   65087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:23.893441   65087 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.137
	I0804 00:15:23.893475   65087 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:23.893489   65087 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:23.893533   65087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:23.933954   65087 cri.go:89] found id: ""
	I0804 00:15:23.934026   65087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:23.951080   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:23.962250   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:23.962274   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:23.962327   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:23.971760   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:23.971817   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:23.981767   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:23.991443   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:23.991494   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:24.001911   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.011927   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:24.011988   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:24.022349   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:24.032305   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:24.032371   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:24.042416   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:24.052403   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:24.163413   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.106900   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.323496   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.410928   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:25.569137   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:25.569221   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.069288   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.570343   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:26.615965   65087 api_server.go:72] duration metric: took 1.046825245s to wait for apiserver process to appear ...
	I0804 00:15:26.615997   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:26.616022   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:26.616618   65087 api_server.go:269] stopped: https://192.168.61.137:8443/healthz: Get "https://192.168.61.137:8443/healthz": dial tcp 192.168.61.137:8443: connect: connection refused
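
After the kubeadm init phases, the restart path first waits for the kube-apiserver process (the pgrep loop) and then polls the apiserver's /healthz endpoint until it answers; the first probe above fails with "connection refused" because the server is still coming up. A minimal sketch of that health wait follows. The URL matches the log; skipping TLS verification is a simplification for the sketch, whereas minikube verifies against the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it returns
    // HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.61.137:8443/healthz", 4*time.Minute))
    }
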
	I0804 00:15:24.788329   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788775   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Found IP for machine: 192.168.39.132
	I0804 00:15:24.788799   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has current primary IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.788811   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserving static IP address...
	I0804 00:15:24.789238   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.789266   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | skip adding static IP to network mk-default-k8s-diff-port-969068 - found existing host DHCP lease matching {name: "default-k8s-diff-port-969068", mac: "52:54:00:60:ac:10", ip: "192.168.39.132"}
	I0804 00:15:24.789287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Reserved static IP address: 192.168.39.132
	I0804 00:15:24.789303   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Waiting for SSH to be available...
	I0804 00:15:24.789333   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Getting to WaitForSSH function...
	I0804 00:15:24.791371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791734   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.791762   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.791904   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH client type: external
	I0804 00:15:24.791934   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa (-rw-------)
	I0804 00:15:24.791975   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:24.791994   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | About to run SSH command:
	I0804 00:15:24.792010   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | exit 0
	I0804 00:15:24.921420   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:24.921795   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetConfigRaw
	I0804 00:15:24.922375   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:24.925074   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925403   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.925431   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.925680   65441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/config.json ...
	I0804 00:15:24.925904   65441 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:24.925924   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:24.926120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:24.928597   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929006   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:24.929045   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:24.929171   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:24.929334   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929498   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:24.929634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:24.929814   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:24.930001   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:24.930012   65441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:25.046325   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:25.046355   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046703   65441 buildroot.go:166] provisioning hostname "default-k8s-diff-port-969068"
	I0804 00:15:25.046733   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.046940   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.049807   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050383   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.050427   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.050547   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.050739   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.050937   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.051131   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.051296   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.051504   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.051525   65441 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-969068 && echo "default-k8s-diff-port-969068" | sudo tee /etc/hostname
	I0804 00:15:25.182512   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-969068
	
	I0804 00:15:25.182552   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.185673   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186019   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.186051   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.186241   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.186425   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186551   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.186660   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.186853   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.187034   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.187051   65441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-969068' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-969068/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-969068' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:25.313435   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:25.313470   65441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:25.313518   65441 buildroot.go:174] setting up certificates
	I0804 00:15:25.313531   65441 provision.go:84] configureAuth start
	I0804 00:15:25.313544   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetMachineName
	I0804 00:15:25.313856   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:25.316883   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317233   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.317287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.317475   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.319773   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320180   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.320214   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.320404   65441 provision.go:143] copyHostCerts
	I0804 00:15:25.320459   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:25.320467   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:25.320531   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:25.320666   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:25.320675   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:25.320702   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:25.320769   65441 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:25.320777   65441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:25.320804   65441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:25.320871   65441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-969068 san=[127.0.0.1 192.168.39.132 default-k8s-diff-port-969068 localhost minikube]
	I0804 00:15:25.374535   65441 provision.go:177] copyRemoteCerts
	I0804 00:15:25.374590   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:25.374613   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.377629   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378047   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.378073   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.378254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.378478   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.378672   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.378897   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.469632   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:25.495826   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0804 00:15:25.527006   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:15:25.557603   65441 provision.go:87] duration metric: took 244.055462ms to configureAuth
	I0804 00:15:25.557637   65441 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:25.557873   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:25.557982   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.560974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561339   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.561389   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.561570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.561740   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.561881   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.562043   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.562248   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.562456   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.562471   65441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:25.835452   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:25.835480   65441 machine.go:97] duration metric: took 909.563441ms to provisionDockerMachine
	I0804 00:15:25.835496   65441 start.go:293] postStartSetup for "default-k8s-diff-port-969068" (driver="kvm2")
	I0804 00:15:25.835512   65441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:25.835541   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:25.835846   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:25.835873   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.838713   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839124   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.839151   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.839287   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.839465   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.839634   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.839779   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:25.928376   65441 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:25.932472   65441 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:25.932498   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:25.932608   65441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:25.932775   65441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:25.932951   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:25.943100   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:25.969517   65441 start.go:296] duration metric: took 134.003956ms for postStartSetup
	I0804 00:15:25.969567   65441 fix.go:56] duration metric: took 20.799045329s for fixHost
	I0804 00:15:25.969591   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:25.972743   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973172   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:25.973204   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:25.973342   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:25.973596   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973768   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:25.973944   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:25.974158   65441 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:25.974330   65441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0804 00:15:25.974343   65441 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:26.095438   65441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730526.053053982
	
	I0804 00:15:26.095462   65441 fix.go:216] guest clock: 1722730526.053053982
	I0804 00:15:26.095472   65441 fix.go:229] Guest: 2024-08-04 00:15:26.053053982 +0000 UTC Remote: 2024-08-04 00:15:25.969572309 +0000 UTC m=+213.641216658 (delta=83.481673ms)
	I0804 00:15:26.095524   65441 fix.go:200] guest clock delta is within tolerance: 83.481673ms
	I0804 00:15:26.095534   65441 start.go:83] releasing machines lock for "default-k8s-diff-port-969068", held for 20.925048627s
	I0804 00:15:26.095570   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.095862   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:26.098718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099112   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.099145   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.099305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.099929   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100108   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:26.100182   65441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:26.100222   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.100347   65441 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:26.100388   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:26.103393   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103720   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.103942   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.103963   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104142   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104159   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:26.104243   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:26.104347   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104384   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:26.104499   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:26.104545   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104718   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:26.104728   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.104881   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:26.214704   65441 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:26.221287   65441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:26.378021   65441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:26.385673   65441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:26.385751   65441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:26.403073   65441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:26.403104   65441 start.go:495] detecting cgroup driver to use...
	I0804 00:15:26.403193   65441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:26.421108   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:26.435556   65441 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:26.435627   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:26.455219   65441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:26.477841   65441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:26.626980   65441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:26.806808   65441 docker.go:233] disabling docker service ...
	I0804 00:15:26.806887   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:26.824079   65441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:26.839225   65441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:26.967375   65441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:27.136156   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:27.151822   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:27.173326   65441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:27.173404   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.184431   65441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:27.184509   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.194890   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.208349   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.222326   65441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:27.237212   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.249571   65441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.274913   65441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:27.288929   65441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:27.305789   65441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:27.305863   65441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:27.321708   65441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:15:27.332129   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:27.482279   65441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:27.638388   65441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:27.638465   65441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:27.644607   65441 start.go:563] Will wait 60s for crictl version
	I0804 00:15:27.644665   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:15:27.648663   65441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:27.691731   65441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:27.691824   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.731365   65441 ssh_runner.go:195] Run: crio --version
	I0804 00:15:27.767416   65441 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:26.121074   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Start
	I0804 00:15:26.121263   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring networks are active...
	I0804 00:15:26.122075   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network default is active
	I0804 00:15:26.122471   64502 main.go:141] libmachine: (embed-certs-877598) Ensuring network mk-embed-certs-877598 is active
	I0804 00:15:26.122884   64502 main.go:141] libmachine: (embed-certs-877598) Getting domain xml...
	I0804 00:15:26.123684   64502 main.go:141] libmachine: (embed-certs-877598) Creating domain...
	I0804 00:15:27.536026   64502 main.go:141] libmachine: (embed-certs-877598) Waiting to get IP...
	I0804 00:15:27.537165   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.537650   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.537734   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.537654   66522 retry.go:31] will retry after 277.473157ms: waiting for machine to come up
	I0804 00:15:27.817330   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:27.817824   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:27.817858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:27.817788   66522 retry.go:31] will retry after 322.160841ms: waiting for machine to come up
	I0804 00:15:28.141287   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.141818   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.141855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.141775   66522 retry.go:31] will retry after 325.833359ms: waiting for machine to come up
	I0804 00:15:28.469440   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.469976   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.470015   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.469933   66522 retry.go:31] will retry after 372.304971ms: waiting for machine to come up
	I0804 00:15:28.843604   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:28.844376   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:28.844400   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:28.844297   66522 retry.go:31] will retry after 607.361674ms: waiting for machine to come up
	I0804 00:15:29.453082   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:29.453557   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:29.453586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:29.453527   66522 retry.go:31] will retry after 615.002468ms: waiting for machine to come up
	I0804 00:15:30.070598   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.071112   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.071134   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.071079   66522 retry.go:31] will retry after 834.292107ms: waiting for machine to come up
	I0804 00:15:27.116719   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.030589   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.030625   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.030641   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.091459   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:30.091494   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:30.116633   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.149335   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.149394   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:30.617009   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:30.622086   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:30.622117   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.116320   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.125065   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:31.125143   65087 api_server.go:103] status: https://192.168.61.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:31.617091   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:15:31.627142   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:15:31.636371   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:15:31.636405   65087 api_server.go:131] duration metric: took 5.020400356s to wait for apiserver health ...
	I0804 00:15:31.636414   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:15:31.636420   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:31.638145   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:26.996399   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.496810   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:27.995825   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.496395   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:28.996561   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.496735   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:29.996542   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.496406   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:30.996259   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.496307   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.639553   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:31.658269   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:31.685188   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:31.703581   65087 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:31.703627   65087 system_pods.go:61] "coredns-6f6b679f8f-9vdxc" [fd645695-cc1d-4394-96b0-832f48e9cf26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:31.703638   65087 system_pods.go:61] "etcd-no-preload-118016" [a329ecd7-7574-48f4-a776-7b7c05465f8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:31.703649   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [43d313aa-1844-488d-8925-b744f504323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:31.703661   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [d56a5461-29d3-47f7-95df-a7fc6b52ca2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:31.703669   65087 system_pods.go:61] "kube-proxy-8bcg7" [c2b43118-5216-41bf-9f16-00f11ca1eab5] Running
	I0804 00:15:31.703678   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [53dc528c-2f00-4ca6-86c6-d02f4533229d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:31.703687   65087 system_pods.go:61] "metrics-server-6867b74b74-5xfgz" [c558b60d-3816-406a-addb-96cd42266bd1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:31.703698   65087 system_pods.go:61] "storage-provisioner" [1edb442e-272f-4ef7-b3fb-7c46b915c61a] Running
	I0804 00:15:31.703707   65087 system_pods.go:74] duration metric: took 18.49198ms to wait for pod list to return data ...
	I0804 00:15:31.703721   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:31.712702   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:31.712735   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:31.712748   65087 node_conditions.go:105] duration metric: took 9.019815ms to run NodePressure ...
	I0804 00:15:31.712773   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:27.768972   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetIP
	I0804 00:15:27.772437   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.772860   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:27.772903   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:27.773135   65441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:27.777834   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:27.792279   65441 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:27.792437   65441 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:27.792493   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:27.833330   65441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:27.833453   65441 ssh_runner.go:195] Run: which lz4
	I0804 00:15:27.837836   65441 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:15:27.842093   65441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:27.842128   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:29.410529   65441 crio.go:462] duration metric: took 1.572735301s to copy over tarball
	I0804 00:15:29.410610   65441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:32.062492   65441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.651848511s)
	I0804 00:15:32.062533   65441 crio.go:469] duration metric: took 2.651972207s to extract the tarball
	I0804 00:15:32.062545   65441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:32.100003   65441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:32.144166   65441 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:32.144192   65441 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:15:32.144201   65441 kubeadm.go:934] updating node { 192.168.39.132 8444 v1.30.3 crio true true} ...
	I0804 00:15:32.144327   65441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-969068 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:32.144434   65441 ssh_runner.go:195] Run: crio config
	I0804 00:15:32.197593   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:32.197618   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:32.197630   65441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:32.197658   65441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-969068 NodeName:default-k8s-diff-port-969068 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:32.197862   65441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.132
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-969068"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:32.197937   65441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:32.208469   65441 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:32.208551   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:32.218194   65441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0804 00:15:32.237731   65441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:32.259599   65441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0804 00:15:32.281113   65441 ssh_runner.go:195] Run: grep 192.168.39.132	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:32.285559   65441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:32.298722   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:30.906612   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:30.907056   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:30.907086   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:30.907012   66522 retry.go:31] will retry after 1.489076061s: waiting for machine to come up
	I0804 00:15:32.397239   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:32.397614   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:32.397642   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:32.397568   66522 retry.go:31] will retry after 1.737097329s: waiting for machine to come up
	I0804 00:15:34.135859   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:34.136363   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:34.136393   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:34.136321   66522 retry.go:31] will retry after 2.154712298s: waiting for machine to come up
	I0804 00:15:31.996780   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.496164   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:32.996444   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.496838   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:33.996533   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.496300   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:34.996772   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.495937   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.996834   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.496277   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:31.982926   65087 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989888   65087 kubeadm.go:739] kubelet initialised
	I0804 00:15:31.989926   65087 kubeadm.go:740] duration metric: took 6.968445ms waiting for restarted kubelet to initialise ...
	I0804 00:15:31.989938   65087 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:31.997210   65087 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:34.748142   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:32.432400   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:32.450525   65441 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068 for IP: 192.168.39.132
	I0804 00:15:32.450548   65441 certs.go:194] generating shared ca certs ...
	I0804 00:15:32.450571   65441 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:32.450738   65441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:32.450801   65441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:32.450815   65441 certs.go:256] generating profile certs ...
	I0804 00:15:32.450922   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.key
	I0804 00:15:32.451000   65441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key.a17bd5dd
	I0804 00:15:32.451053   65441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key
	I0804 00:15:32.451199   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:32.451242   65441 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:32.451255   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:32.451279   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:32.451303   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:32.451326   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:32.451365   65441 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:32.451910   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:32.505178   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:32.557546   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:32.596512   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:32.635476   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 00:15:32.687156   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:32.716537   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:32.746312   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:15:32.777788   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:32.806730   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:32.835822   65441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:32.864241   65441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:32.886754   65441 ssh_runner.go:195] Run: openssl version
	I0804 00:15:32.893177   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:32.904847   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909871   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.909937   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:32.916357   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:32.927322   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:32.939447   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944221   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.944275   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:32.950218   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:32.966506   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:32.981288   65441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986761   65441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.986831   65441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:32.993077   65441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
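The three test/ln/hash runs above install each CA into the guest's OpenSSL trust store by subject hash: `openssl x509 -hash -noout` prints the hash (b5213941 for minikubeCA.pem in this log), and the certificate is then linked as /etc/ssl/certs/<hash>.0. A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual certs.go code, and the certificate path is an assumption taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert hashes a CA certificate with openssl and links it into
// /etc/ssl/certs under its subject hash so OpenSSL-based clients trust it.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}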
	I0804 00:15:33.007428   65441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:33.013290   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:33.019997   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:33.026423   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:33.033004   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:33.039205   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:33.045367   65441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:15:33.051462   65441 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-969068 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-969068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:33.051546   65441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:33.051605   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.094354   65441 cri.go:89] found id: ""
	I0804 00:15:33.094433   65441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:33.105416   65441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:33.105439   65441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:33.105480   65441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:33.115838   65441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:33.117466   65441 kubeconfig.go:125] found "default-k8s-diff-port-969068" server: "https://192.168.39.132:8444"
	I0804 00:15:33.120806   65441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:33.130533   65441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.132
	I0804 00:15:33.130567   65441 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:33.130579   65441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:33.130628   65441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:33.178718   65441 cri.go:89] found id: ""
	I0804 00:15:33.178813   65441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:33.199000   65441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:33.212169   65441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:33.212188   65441 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:33.212255   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0804 00:15:33.225192   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:33.225254   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:33.239194   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0804 00:15:33.252402   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:33.252470   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:33.265198   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.276564   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:33.276636   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:33.288785   65441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0804 00:15:33.299848   65441 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:33.299904   65441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:33.311115   65441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:33.322121   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:33.442578   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.526815   65441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084197731s)
	I0804 00:15:34.526857   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.803105   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.893681   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:34.978573   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:34.978668   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.479179   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:35.979520   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:36.063056   65441 api_server.go:72] duration metric: took 1.084463955s to wait for apiserver process to appear ...
	I0804 00:15:36.063161   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:36.063203   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.063755   65441 api_server.go:269] stopped: https://192.168.39.132:8444/healthz: Get "https://192.168.39.132:8444/healthz": dial tcp 192.168.39.132:8444: connect: connection refused
	I0804 00:15:36.563501   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:36.293051   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:36.293675   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:36.293710   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:36.293604   66522 retry.go:31] will retry after 2.826050203s: waiting for machine to come up
	I0804 00:15:39.120961   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:39.121602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:39.121628   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:39.121554   66522 retry.go:31] will retry after 2.710829438s: waiting for machine to come up
	I0804 00:15:36.996761   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.495885   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.995785   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.496550   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:38.996645   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.495814   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:39.995851   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.496685   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:40.995896   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:41.495864   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:37.005216   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.505397   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:39.405829   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.405895   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.405913   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.433026   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:39.433063   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:39.563242   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:39.568554   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:39.568591   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.064078   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.085940   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.085978   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:40.564041   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:40.569785   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:40.569812   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.063334   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.068113   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.068135   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:41.563691   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:41.569214   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:41.569248   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.063737   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.068227   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.068260   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:42.563309   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:42.567740   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:42.567775   65441 api_server.go:103] status: https://192.168.39.132:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:43.063306   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:15:43.067611   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:15:43.073842   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:15:43.073868   65441 api_server.go:131] duration metric: took 7.010684682s to wait for apiserver health ...
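The healthz probes above follow a simple pattern: poll https://<node-ip>:8444/healthz until the apiserver stops answering 403/500 (which it does while RBAC bootstrap roles and other post-start hooks finish) and returns 200. Below is a minimal sketch of such a poller, assuming the address and roughly the 500ms interval seen in this log; it is not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz GETs the healthz URL until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe in the log is anonymous, so certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			// 403 or 500 while post-start hooks are still running: fall through and retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.132:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}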
	I0804 00:15:43.073879   65441 cni.go:84] Creating CNI manager for ""
	I0804 00:15:43.073887   65441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:43.075779   65441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:43.077123   65441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:15:43.088611   65441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:15:43.109845   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:15:43.119204   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:15:43.119235   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:15:43.119246   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:15:43.119259   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:15:43.119269   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:15:43.119275   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:15:43.119282   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:15:43.119300   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:15:43.119309   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:15:43.119317   65441 system_pods.go:74] duration metric: took 9.453775ms to wait for pod list to return data ...
	I0804 00:15:43.119328   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:15:43.122493   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:15:43.122516   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:15:43.122528   65441 node_conditions.go:105] duration metric: took 3.191087ms to run NodePressure ...
	I0804 00:15:43.122547   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:43.391258   65441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395252   65441 kubeadm.go:739] kubelet initialised
	I0804 00:15:43.395274   65441 kubeadm.go:740] duration metric: took 3.992079ms waiting for restarted kubelet to initialise ...
	I0804 00:15:43.395282   65441 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:43.400173   65441 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.404618   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404645   65441 pod_ready.go:81] duration metric: took 4.449232ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.404665   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.404675   65441 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.409134   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409165   65441 pod_ready.go:81] duration metric: took 4.471898ms for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.409178   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.409190   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.414342   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414362   65441 pod_ready.go:81] duration metric: took 5.160435ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.414374   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.414383   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.513956   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.513987   65441 pod_ready.go:81] duration metric: took 99.59507ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.514003   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.514033   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:43.913592   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913619   65441 pod_ready.go:81] duration metric: took 399.572927ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:43.913628   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-proxy-zz7fr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:43.913634   65441 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.313833   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313864   65441 pod_ready.go:81] duration metric: took 400.220214ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.313878   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.313886   65441 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.713583   65441 pod_ready.go:97] node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713616   65441 pod_ready.go:81] duration metric: took 399.716432ms for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:15:44.713636   65441 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-969068" hosting pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:44.713647   65441 pod_ready.go:38] duration metric: took 1.318356042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:44.713666   65441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:15:44.725908   65441 ops.go:34] apiserver oom_adj: -16
	I0804 00:15:44.725935   65441 kubeadm.go:597] duration metric: took 11.620489409s to restartPrimaryControlPlane
	I0804 00:15:44.725947   65441 kubeadm.go:394] duration metric: took 11.674491721s to StartCluster
	I0804 00:15:44.725966   65441 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.726046   65441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:15:44.728392   65441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:44.728702   65441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.132 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:15:44.728805   65441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:15:44.728895   65441 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728942   65441 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.728954   65441 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:15:44.728958   65441 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.728990   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.728967   65441 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-969068"
	I0804 00:15:44.729027   65441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-969068"
	I0804 00:15:44.729039   65441 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.729054   65441 addons.go:243] addon metrics-server should already be in state true
	I0804 00:15:44.729143   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.729436   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729470   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729515   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729564   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.729598   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.729642   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.728909   65441 config.go:182] Loaded profile config "default-k8s-diff-port-969068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:44.730486   65441 out.go:177] * Verifying Kubernetes components...
	I0804 00:15:44.731972   65441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:44.748737   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0804 00:15:44.749200   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
	I0804 00:15:44.749311   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0804 00:15:44.749582   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749691   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.749858   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.750128   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750144   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750153   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750171   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750326   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.750347   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.750609   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750617   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.750810   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.751212   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.751249   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751286   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.751733   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.751780   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.754574   65441 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-969068"
	W0804 00:15:44.754616   65441 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:15:44.754649   65441 host.go:66] Checking if "default-k8s-diff-port-969068" exists ...
	I0804 00:15:44.755038   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.755080   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.769763   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I0804 00:15:44.770311   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.770828   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.770850   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.771209   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.771371   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.771935   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0804 00:15:44.773284   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.773416   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39091
	I0804 00:15:44.773646   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.773854   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.773866   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.773981   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.774227   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.774529   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.774551   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.774665   65441 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:15:44.774711   65441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:15:44.774938   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.775078   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.776166   65441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:15:44.776690   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.777692   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:15:44.777708   65441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:15:44.777724   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.778473   65441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:15:41.833728   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:41.834246   64502 main.go:141] libmachine: (embed-certs-877598) DBG | unable to find current IP address of domain embed-certs-877598 in network mk-embed-certs-877598
	I0804 00:15:41.834270   64502 main.go:141] libmachine: (embed-certs-877598) DBG | I0804 00:15:41.834210   66522 retry.go:31] will retry after 2.891635961s: waiting for machine to come up
	I0804 00:15:44.727424   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727895   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has current primary IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.727919   64502 main.go:141] libmachine: (embed-certs-877598) Found IP for machine: 192.168.50.140
	I0804 00:15:44.727943   64502 main.go:141] libmachine: (embed-certs-877598) Reserving static IP address...
	I0804 00:15:44.728570   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.728602   64502 main.go:141] libmachine: (embed-certs-877598) DBG | skip adding static IP to network mk-embed-certs-877598 - found existing host DHCP lease matching {name: "embed-certs-877598", mac: "52:54:00:86:aa:38", ip: "192.168.50.140"}
	I0804 00:15:44.728617   64502 main.go:141] libmachine: (embed-certs-877598) Reserved static IP address: 192.168.50.140
	I0804 00:15:44.728634   64502 main.go:141] libmachine: (embed-certs-877598) Waiting for SSH to be available...
	I0804 00:15:44.728648   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Getting to WaitForSSH function...
	I0804 00:15:44.731684   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732102   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.732137   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.732388   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH client type: external
	I0804 00:15:44.732408   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa (-rw-------)
	I0804 00:15:44.732438   64502 main.go:141] libmachine: (embed-certs-877598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:15:44.732448   64502 main.go:141] libmachine: (embed-certs-877598) DBG | About to run SSH command:
	I0804 00:15:44.732462   64502 main.go:141] libmachine: (embed-certs-877598) DBG | exit 0
	I0804 00:15:44.873689   64502 main.go:141] libmachine: (embed-certs-877598) DBG | SSH cmd err, output: <nil>: 
	I0804 00:15:44.874033   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetConfigRaw
	I0804 00:15:44.874716   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:44.877406   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.877823   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.877855   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.878130   64502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/config.json ...
	I0804 00:15:44.878358   64502 machine.go:94] provisionDockerMachine start ...
	I0804 00:15:44.878382   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:44.878563   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:44.880862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881215   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:44.881253   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:44.881427   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:44.881597   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881785   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:44.881958   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:44.882150   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:44.882381   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:44.882399   64502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:15:44.998143   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0804 00:15:44.998172   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998534   64502 buildroot.go:166] provisioning hostname "embed-certs-877598"
	I0804 00:15:44.998564   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:44.998761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.001998   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002508   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.002545   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.002691   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.002847   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003026   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.003175   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.003388   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.003592   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.003606   64502 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-877598 && echo "embed-certs-877598" | sudo tee /etc/hostname
	I0804 00:15:45.142065   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-877598
	
	I0804 00:15:45.142123   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.145427   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.145858   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.145912   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.146133   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.146279   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146422   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.146595   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.146778   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.146991   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.147007   64502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-877598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-877598/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-877598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:15:45.275711   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:15:45.275748   64502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-9607/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-9607/.minikube}
	I0804 00:15:45.275775   64502 buildroot.go:174] setting up certificates
	I0804 00:15:45.275790   64502 provision.go:84] configureAuth start
	I0804 00:15:45.275804   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetMachineName
	I0804 00:15:45.276145   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:45.279645   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280141   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.280166   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.280298   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.283135   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283495   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.283521   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.283693   64502 provision.go:143] copyHostCerts
	I0804 00:15:45.283754   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem, removing ...
	I0804 00:15:45.283767   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem
	I0804 00:15:45.283837   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/ca.pem (1082 bytes)
	I0804 00:15:45.283954   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem, removing ...
	I0804 00:15:45.283975   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem
	I0804 00:15:45.284004   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/cert.pem (1123 bytes)
	I0804 00:15:45.284168   64502 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem, removing ...
	I0804 00:15:45.284182   64502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem
	I0804 00:15:45.284214   64502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-9607/.minikube/key.pem (1679 bytes)
	I0804 00:15:45.284280   64502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem org=jenkins.embed-certs-877598 san=[127.0.0.1 192.168.50.140 embed-certs-877598 localhost minikube]
	I0804 00:15:45.484805   64502 provision.go:177] copyRemoteCerts
	I0804 00:15:45.484861   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:15:45.484883   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.488177   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488586   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.488621   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.488852   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.489032   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.489191   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.489340   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:45.580782   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:15:45.612118   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0804 00:15:45.638201   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:15:45.665741   64502 provision.go:87] duration metric: took 389.935703ms to configureAuth
	I0804 00:15:45.665778   64502 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:15:45.666008   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:15:45.666110   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.668942   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669312   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.669343   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.669589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.669812   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.669995   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.670158   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.670317   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:45.670501   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:45.670522   64502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:15:44.779708   65441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:44.779730   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:15:44.779747   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.780637   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781098   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.781120   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.781219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.781424   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.781593   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.781753   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.783024   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783459   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.783479   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.783895   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.784054   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.784219   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.784343   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.793057   65441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0804 00:15:44.793581   65441 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:15:44.794075   65441 main.go:141] libmachine: Using API Version  1
	I0804 00:15:44.794094   65441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:15:44.794413   65441 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:15:44.794586   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetState
	I0804 00:15:44.796274   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .DriverName
	I0804 00:15:44.796609   65441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:44.796623   65441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:15:44.796643   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHHostname
	I0804 00:15:44.799445   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.799990   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:ac:10", ip: ""} in network mk-default-k8s-diff-port-969068: {Iface:virbr2 ExpiryTime:2024-08-04 01:15:16 +0000 UTC Type:0 Mac:52:54:00:60:ac:10 Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:default-k8s-diff-port-969068 Clientid:01:52:54:00:60:ac:10}
	I0804 00:15:44.800254   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | domain default-k8s-diff-port-969068 has defined IP address 192.168.39.132 and MAC address 52:54:00:60:ac:10 in network mk-default-k8s-diff-port-969068
	I0804 00:15:44.800698   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHPort
	I0804 00:15:44.800864   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHKeyPath
	I0804 00:15:44.800974   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .GetSSHUsername
	I0804 00:15:44.801305   65441 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/default-k8s-diff-port-969068/id_rsa Username:docker}
	I0804 00:15:44.962413   65441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:44.983596   65441 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:45.057238   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:15:45.057261   65441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:15:45.082722   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:15:45.082745   65441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:15:45.088213   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:15:45.115230   65441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.115261   65441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:15:45.115325   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:15:45.164676   65441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:15:45.502008   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502040   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502381   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.502440   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502463   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.502476   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.502484   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.502701   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.502718   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:45.510043   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:45.510065   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:45.510305   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:45.510353   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:45.510364   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217233   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.101870491s)
	I0804 00:15:46.217295   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217308   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.217585   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.217609   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.217625   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.217652   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.217719   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.218073   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.218091   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.218104   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.255756   65441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.091044347s)
	I0804 00:15:46.255802   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.255819   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256053   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256093   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256101   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256110   65441 main.go:141] libmachine: Making call to close driver server
	I0804 00:15:46.256117   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) Calling .Close
	I0804 00:15:46.256412   65441 main.go:141] libmachine: (default-k8s-diff-port-969068) DBG | Closing plugin on server side
	I0804 00:15:46.256446   65441 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:15:46.256454   65441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:15:46.256465   65441 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-969068"
	I0804 00:15:46.258662   65441 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:15:41.995808   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.496612   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.996566   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.495812   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:43.996095   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.495902   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:44.996724   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.495854   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:45.996354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:46.496185   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:42.005235   65087 pod_ready.go:102] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:44.003809   65087 pod_ready.go:92] pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.003847   65087 pod_ready.go:81] duration metric: took 12.006609818s for pod "coredns-6f6b679f8f-9vdxc" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.003861   65087 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009518   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.009541   65087 pod_ready.go:81] duration metric: took 5.671724ms for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.009554   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014897   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:44.014923   65087 pod_ready.go:81] duration metric: took 5.360171ms for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:44.014938   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521943   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.521968   65087 pod_ready.go:81] duration metric: took 1.507021563s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.521983   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527550   65087 pod_ready.go:92] pod "kube-proxy-8bcg7" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.527575   65087 pod_ready.go:81] duration metric: took 5.585026ms for pod "kube-proxy-8bcg7" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.527588   65087 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604221   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:45.604245   65087 pod_ready.go:81] duration metric: took 76.648502ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:45.604260   65087 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:46.260578   65441 addons.go:510] duration metric: took 1.531768603s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:15:46.988351   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:45.985471   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:15:45.985501   64502 machine.go:97] duration metric: took 1.107126695s to provisionDockerMachine
	I0804 00:15:45.985514   64502 start.go:293] postStartSetup for "embed-certs-877598" (driver="kvm2")
	I0804 00:15:45.985527   64502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:15:45.985554   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:45.985928   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:15:45.985962   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:45.989294   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989699   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:45.989731   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:45.989875   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:45.990079   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:45.990230   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:45.990355   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.085684   64502 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:15:46.091660   64502 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:15:46.091690   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/addons for local assets ...
	I0804 00:15:46.091776   64502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-9607/.minikube/files for local assets ...
	I0804 00:15:46.091873   64502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem -> 167952.pem in /etc/ssl/certs
	I0804 00:15:46.092005   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:15:46.102373   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:46.129547   64502 start.go:296] duration metric: took 144.018823ms for postStartSetup
	I0804 00:15:46.129594   64502 fix.go:56] duration metric: took 20.033890858s for fixHost
	I0804 00:15:46.129619   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.132803   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133154   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.133190   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.133347   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.133580   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.133766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.134016   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.134242   64502 main.go:141] libmachine: Using SSH client type: native
	I0804 00:15:46.134454   64502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0804 00:15:46.134471   64502 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:15:46.250499   64502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722730546.219077490
	
	I0804 00:15:46.250528   64502 fix.go:216] guest clock: 1722730546.219077490
	I0804 00:15:46.250539   64502 fix.go:229] Guest: 2024-08-04 00:15:46.21907749 +0000 UTC Remote: 2024-08-04 00:15:46.129599456 +0000 UTC m=+355.401502879 (delta=89.478034ms)
	I0804 00:15:46.250567   64502 fix.go:200] guest clock delta is within tolerance: 89.478034ms
	I0804 00:15:46.250575   64502 start.go:83] releasing machines lock for "embed-certs-877598", held for 20.15490553s
	I0804 00:15:46.250609   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.250902   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:46.253782   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254164   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.254194   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.254376   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.254967   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255169   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:15:46.255247   64502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:15:46.255307   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.255376   64502 ssh_runner.go:195] Run: cat /version.json
	I0804 00:15:46.255399   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:15:46.260113   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260481   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.260511   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260529   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.260702   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.260870   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.260995   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:46.261022   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:46.261045   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261182   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:15:46.261208   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.261305   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:15:46.261451   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:15:46.261588   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:15:46.372061   64502 ssh_runner.go:195] Run: systemctl --version
	I0804 00:15:46.378356   64502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:15:46.527705   64502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:15:46.534567   64502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:15:46.534620   64502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:15:46.550801   64502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:15:46.550829   64502 start.go:495] detecting cgroup driver to use...
	I0804 00:15:46.550916   64502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:15:46.568369   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:15:46.583437   64502 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:15:46.583496   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:15:46.599267   64502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:15:46.614874   64502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:15:46.734467   64502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:15:46.900868   64502 docker.go:233] disabling docker service ...
	I0804 00:15:46.900941   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:15:46.915612   64502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:15:46.929948   64502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:15:47.056637   64502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:15:47.175277   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:15:47.190167   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:15:47.211062   64502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:15:47.211115   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.222459   64502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:15:47.222547   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.232964   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.243663   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.254387   64502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:15:47.266424   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.277323   64502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.296078   64502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:15:47.307058   64502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:15:47.317138   64502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:15:47.317223   64502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:15:47.332104   64502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
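[Editor's note: the three steps above — a failed sysctl probe of net.bridge.bridge-nf-call-iptables, a fallback modprobe of br_netfilter, then enabling IPv4 forwarding — are the usual pattern when the bridge netfilter module is not yet loaded. Below is a minimal Go sketch of that pattern; the helper name ensureBridgeNetfilter and the use of local os/exec instead of minikube's ssh_runner are illustrative assumptions, not minikube's implementation.]

// ensureBridgeNetfilter is a hypothetical helper: it probes the bridge
// netfilter sysctl, loads br_netfilter if the probe fails, and enables
// IPv4 forwarding. Commands run locally via os/exec for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// Probe the sysctl first; a non-zero exit usually means the
	// br_netfilter module has not been loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Fall back to loading the module explicitly.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	// Enable IPv4 forwarding, mirroring the log above.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}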
	I0804 00:15:47.342965   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:47.464208   64502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:15:47.620127   64502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:15:47.620196   64502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:15:47.625103   64502 start.go:563] Will wait 60s for crictl version
	I0804 00:15:47.625165   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:15:47.628942   64502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:15:47.668593   64502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:15:47.668686   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.700313   64502 ssh_runner.go:195] Run: crio --version
	I0804 00:15:47.737281   64502 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:15:47.738730   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetIP
	I0804 00:15:47.741698   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742098   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:15:47.742144   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:15:47.742310   64502 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 00:15:47.746883   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:47.760111   64502 kubeadm.go:883] updating cluster {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:15:47.760247   64502 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:15:47.760305   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:47.801700   64502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:15:47.801766   64502 ssh_runner.go:195] Run: which lz4
	I0804 00:15:47.806337   64502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:15:47.811010   64502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:15:47.811050   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:15:49.359157   64502 crio.go:462] duration metric: took 1.552864688s to copy over tarball
	I0804 00:15:49.359245   64502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:15:46.996215   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.496634   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.996278   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.496184   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:48.996616   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.496240   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:49.996433   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.495914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:50.996600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:51.496459   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:47.611474   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:49.611879   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.616732   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:48.988818   65441 node_ready.go:53] node "default-k8s-diff-port-969068" has status "Ready":"False"
	I0804 00:15:49.988196   65441 node_ready.go:49] node "default-k8s-diff-port-969068" has status "Ready":"True"
	I0804 00:15:49.988220   65441 node_ready.go:38] duration metric: took 5.004585481s for node "default-k8s-diff-port-969068" to be "Ready" ...
	I0804 00:15:49.988229   65441 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:15:49.994536   65441 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001200   65441 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:50.001229   65441 pod_ready.go:81] duration metric: took 6.665744ms for pod "coredns-7db6d8ff4d-b8v28" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:50.001243   65441 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:52.009436   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:51.759772   64502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.400487256s)
	I0804 00:15:51.759836   64502 crio.go:469] duration metric: took 2.40064418s to extract the tarball
	I0804 00:15:51.759848   64502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:15:51.800038   64502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:15:51.845098   64502 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:15:51.845124   64502 cache_images.go:84] Images are preloaded, skipping loading
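[Editor's note: the preload flow above lists CRI images, finds the expected kube-apiserver image missing, copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to the node, extracts it, and re-lists to confirm. A minimal Go sketch of the "is the required image already present?" check follows; checkPreloaded, the local os/exec invocation, and the exact JSON field names of `crictl images --output json` are assumptions for illustration only.]

// checkPreloaded is a hypothetical helper mirroring the decision in the log:
// list images via crictl and report whether a required image tag (e.g.
// registry.k8s.io/kube-apiserver:v1.30.3) is already present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models the subset of `crictl images --output json` used here;
// the field names are an assumption, not a documented contract.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func checkPreloaded(required string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, required) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := checkPreloaded("kube-apiserver:v1.30.3")
	fmt.Println(ok, err) // a false result is what triggers copying the preload tarball
}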
	I0804 00:15:51.845134   64502 kubeadm.go:934] updating node { 192.168.50.140 8443 v1.30.3 crio true true} ...
	I0804 00:15:51.845258   64502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-877598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:15:51.845339   64502 ssh_runner.go:195] Run: crio config
	I0804 00:15:51.895019   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:15:51.895039   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:15:51.895048   64502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:15:51.895067   64502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-877598 NodeName:embed-certs-877598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:15:51.895202   64502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-877598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:15:51.895272   64502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:15:51.906363   64502 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:15:51.906426   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:15:51.917727   64502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0804 00:15:51.936370   64502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:15:51.953894   64502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0804 00:15:51.972910   64502 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I0804 00:15:51.977288   64502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:15:51.990992   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:15:52.115808   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:15:52.133326   64502 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598 for IP: 192.168.50.140
	I0804 00:15:52.133373   64502 certs.go:194] generating shared ca certs ...
	I0804 00:15:52.133396   64502 certs.go:226] acquiring lock for ca certs: {Name:mk9b604da46910ef8c4687f23e694acce87e3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:15:52.133564   64502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key
	I0804 00:15:52.133613   64502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key
	I0804 00:15:52.133628   64502 certs.go:256] generating profile certs ...
	I0804 00:15:52.133736   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/client.key
	I0804 00:15:52.133824   64502 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key.5511d337
	I0804 00:15:52.133873   64502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key
	I0804 00:15:52.134013   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem (1338 bytes)
	W0804 00:15:52.134077   64502 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795_empty.pem, impossibly tiny 0 bytes
	I0804 00:15:52.134091   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:15:52.134130   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:15:52.134168   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:15:52.134200   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/certs/key.pem (1679 bytes)
	I0804 00:15:52.134256   64502 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem (1708 bytes)
	I0804 00:15:52.134880   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:15:52.175985   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:15:52.209458   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:15:52.239097   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:15:52.271037   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0804 00:15:52.317594   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:15:52.353485   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:15:52.382159   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/embed-certs-877598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:15:52.407478   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/certs/16795.pem --> /usr/share/ca-certificates/16795.pem (1338 bytes)
	I0804 00:15:52.433103   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/ssl/certs/167952.pem --> /usr/share/ca-certificates/167952.pem (1708 bytes)
	I0804 00:15:52.457233   64502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:15:52.481534   64502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:15:52.500482   64502 ssh_runner.go:195] Run: openssl version
	I0804 00:15:52.509021   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16795.pem && ln -fs /usr/share/ca-certificates/16795.pem /etc/ssl/certs/16795.pem"
	I0804 00:15:52.522431   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527125   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:02 /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.527184   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16795.pem
	I0804 00:15:52.533310   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16795.pem /etc/ssl/certs/51391683.0"
	I0804 00:15:52.546085   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167952.pem && ln -fs /usr/share/ca-certificates/167952.pem /etc/ssl/certs/167952.pem"
	I0804 00:15:52.557781   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562516   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:02 /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.562587   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167952.pem
	I0804 00:15:52.568403   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167952.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:15:52.580431   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:15:52.592706   64502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597280   64502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.597382   64502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:15:52.603284   64502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
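[Editor's note: each certificate above is installed by computing its OpenSSL subject hash (`openssl x509 -hash -noout -in <cert>`) and symlinking the cert as `<hash>.0` under /etc/ssl/certs, which is how OpenSSL locates trusted CAs. A minimal Go sketch of that pattern, assuming illustrative local paths and sufficient permissions rather than the sudo shell wrappers used in the log:]

// linkCAByHash is a hypothetical helper showing the subject-hash symlink
// pattern from the log: compute the OpenSSL subject hash of a CA cert and
// link it as <hash>.0 in the certificate directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCAByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	// Replace any stale link, mirroring `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Paths are illustrative; the log uses /usr/share/ca-certificates and /etc/ssl/certs.
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("symlink failed:", err)
	}
}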
	I0804 00:15:52.616100   64502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:15:52.621422   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:15:52.631811   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:15:52.639130   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:15:52.646159   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:15:52.652721   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:15:52.659459   64502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:15:52.665864   64502 kubeadm.go:392] StartCluster: {Name:embed-certs-877598 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-877598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:15:52.665991   64502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:15:52.666044   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.711272   64502 cri.go:89] found id: ""
	I0804 00:15:52.711346   64502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:15:52.722294   64502 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0804 00:15:52.722321   64502 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0804 00:15:52.722380   64502 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0804 00:15:52.733148   64502 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:15:52.734706   64502 kubeconfig.go:125] found "embed-certs-877598" server: "https://192.168.50.140:8443"
	I0804 00:15:52.737995   64502 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0804 00:15:52.749941   64502 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.140
	I0804 00:15:52.749986   64502 kubeadm.go:1160] stopping kube-system containers ...
	I0804 00:15:52.749998   64502 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0804 00:15:52.750043   64502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:15:52.793295   64502 cri.go:89] found id: ""
	I0804 00:15:52.793388   64502 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:15:52.811438   64502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:15:52.824055   64502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:15:52.824080   64502 kubeadm.go:157] found existing configuration files:
	
	I0804 00:15:52.824128   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:15:52.835393   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:15:52.835446   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:15:52.846732   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:15:52.856889   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:15:52.856942   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:15:52.869951   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.881836   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:15:52.881909   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:15:52.894121   64502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:15:52.905643   64502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:15:52.905711   64502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:15:52.917063   64502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:15:52.929399   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:53.132145   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.096969   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.325640   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.385886   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:15:54.472926   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:15:54.473002   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.973406   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.473410   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.578082   64502 api_server.go:72] duration metric: took 1.105154357s to wait for apiserver process to appear ...
	I0804 00:15:55.578170   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:15:55.578207   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:55.578847   64502 api_server.go:269] stopped: https://192.168.50.140:8443/healthz: Get "https://192.168.50.140:8443/healthz": dial tcp 192.168.50.140:8443: connect: connection refused
	I0804 00:15:51.996447   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.496265   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:52.996030   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.496508   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:53.996313   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.495823   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:54.996360   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.496652   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:55.996049   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:55.996141   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:56.045001   64758 cri.go:89] found id: ""
	I0804 00:15:56.045031   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.045042   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:56.045049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:56.045114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:56.086505   64758 cri.go:89] found id: ""
	I0804 00:15:56.086535   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.086547   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:56.086554   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:56.086618   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:56.125953   64758 cri.go:89] found id: ""
	I0804 00:15:56.125983   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.125994   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:56.126001   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:56.126060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:56.167313   64758 cri.go:89] found id: ""
	I0804 00:15:56.167343   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.167354   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:56.167361   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:56.167424   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:56.211102   64758 cri.go:89] found id: ""
	I0804 00:15:56.211132   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.211142   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:56.211149   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:56.211231   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:56.246894   64758 cri.go:89] found id: ""
	I0804 00:15:56.246926   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.246937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:56.246945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:56.247012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:56.281952   64758 cri.go:89] found id: ""
	I0804 00:15:56.281980   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.281991   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:56.281998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:56.282060   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:56.317685   64758 cri.go:89] found id: ""
	I0804 00:15:56.317719   64758 logs.go:276] 0 containers: []
	W0804 00:15:56.317733   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:56.317745   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:56.317762   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:56.335040   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:56.335069   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:56.475995   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:56.476017   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:56.476033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:56.567508   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:56.567551   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:56.618136   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:56.618166   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:54.112928   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.112987   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.179330   65441 pod_ready.go:102] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:54.789712   65441 pod_ready.go:92] pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.789738   65441 pod_ready.go:81] duration metric: took 4.788487591s for pod "etcd-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.789749   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799762   65441 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.799785   65441 pod_ready.go:81] duration metric: took 10.029856ms for pod "kube-apiserver-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.799795   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805685   65441 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.805708   65441 pod_ready.go:81] duration metric: took 5.905108ms for pod "kube-controller-manager-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.805718   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809797   65441 pod_ready.go:92] pod "kube-proxy-zz7fr" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.809818   65441 pod_ready.go:81] duration metric: took 4.094183ms for pod "kube-proxy-zz7fr" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.809827   65441 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820536   65441 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace has status "Ready":"True"
	I0804 00:15:54.820557   65441 pod_ready.go:81] duration metric: took 10.722903ms for pod "kube-scheduler-default-k8s-diff-port-969068" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:54.820567   65441 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	I0804 00:15:56.827543   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:15:56.078916   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.738609   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.738641   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:58.738657   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:58.772665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:15:58.772695   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:15:59.079121   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.083798   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.083829   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.579242   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:15:59.585343   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:15:59.585381   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.078877   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.099981   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.100022   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:00.578505   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:00.582665   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:00.582692   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:15:59.172886   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:15:59.187045   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:15:59.187128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:15:59.225135   64758 cri.go:89] found id: ""
	I0804 00:15:59.225164   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.225173   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:15:59.225179   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:15:59.225255   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:15:59.262538   64758 cri.go:89] found id: ""
	I0804 00:15:59.262566   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.262573   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:15:59.262578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:15:59.262635   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:15:59.301665   64758 cri.go:89] found id: ""
	I0804 00:15:59.301697   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.301708   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:15:59.301715   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:15:59.301778   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:15:59.362742   64758 cri.go:89] found id: ""
	I0804 00:15:59.362766   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.362774   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:15:59.362779   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:15:59.362834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:15:59.404398   64758 cri.go:89] found id: ""
	I0804 00:15:59.404431   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.404509   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:15:59.404525   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:15:59.404594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:15:59.454257   64758 cri.go:89] found id: ""
	I0804 00:15:59.454285   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.454297   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:15:59.454305   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:15:59.454363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:15:59.496790   64758 cri.go:89] found id: ""
	I0804 00:15:59.496818   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.496829   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:15:59.496837   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:15:59.496896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:15:59.537395   64758 cri.go:89] found id: ""
	I0804 00:15:59.537424   64758 logs.go:276] 0 containers: []
	W0804 00:15:59.537431   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:15:59.537439   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:15:59.537453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:15:59.600005   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:15:59.600042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:15:59.617304   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:15:59.617336   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:15:59.692828   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:15:59.692849   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:15:59.692864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:15:59.764000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:15:59.764038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:15:58.611600   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.110986   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.079326   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.083661   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.083689   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:01.578711   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:01.583011   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:16:01.583040   64502 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:16:02.078606   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:16:02.083234   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:16:02.090079   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:16:02.090112   64502 api_server.go:131] duration metric: took 6.511921332s to wait for apiserver health ...
	I0804 00:16:02.090123   64502 cni.go:84] Creating CNI manager for ""
	I0804 00:16:02.090132   64502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:16:02.092169   64502 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:15:58.829268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:01.327623   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:02.093704   64502 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:16:02.109001   64502 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:16:02.131996   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:16:02.145300   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:16:02.145333   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 00:16:02.145340   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:16:02.145348   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:16:02.145370   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:16:02.145380   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:16:02.145389   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:16:02.145397   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:16:02.145403   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:16:02.145412   64502 system_pods.go:74] duration metric: took 13.393537ms to wait for pod list to return data ...
	I0804 00:16:02.145425   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:16:02.149623   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:16:02.149651   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:16:02.149661   64502 node_conditions.go:105] duration metric: took 4.231097ms to run NodePressure ...
	I0804 00:16:02.149677   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:16:02.424261   64502 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429537   64502 kubeadm.go:739] kubelet initialised
	I0804 00:16:02.429555   64502 kubeadm.go:740] duration metric: took 5.269005ms waiting for restarted kubelet to initialise ...
	I0804 00:16:02.429563   64502 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:02.435433   64502 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.440580   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440606   64502 pod_ready.go:81] duration metric: took 5.145511ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.440619   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.440628   64502 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.445111   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445136   64502 pod_ready.go:81] duration metric: took 4.497361ms for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.445148   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "etcd-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.445157   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.450172   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450200   64502 pod_ready.go:81] duration metric: took 5.032514ms for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.450211   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.450219   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.536314   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536386   64502 pod_ready.go:81] duration metric: took 86.155481ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.536398   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.536409   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:02.935794   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935830   64502 pod_ready.go:81] duration metric: took 399.405535ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:02.935842   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-proxy-wk8zf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:02.935861   64502 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.335730   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335760   64502 pod_ready.go:81] duration metric: took 399.889478ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.335772   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.335780   64502 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:03.735762   64502 pod_ready.go:97] node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735786   64502 pod_ready.go:81] duration metric: took 399.996995ms for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:16:03.735795   64502 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-877598" hosting pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:03.735802   64502 pod_ready.go:38] duration metric: took 1.306222891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:03.735818   64502 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:16:03.748578   64502 ops.go:34] apiserver oom_adj: -16
	I0804 00:16:03.748602   64502 kubeadm.go:597] duration metric: took 11.026274037s to restartPrimaryControlPlane
	I0804 00:16:03.748611   64502 kubeadm.go:394] duration metric: took 11.082760058s to StartCluster
	I0804 00:16:03.748637   64502 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.748719   64502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:16:03.750554   64502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:16:03.750824   64502 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:16:03.750900   64502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:16:03.750998   64502 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-877598"
	I0804 00:16:03.751041   64502 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-877598"
	W0804 00:16:03.751053   64502 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:16:03.751051   64502 addons.go:69] Setting default-storageclass=true in profile "embed-certs-877598"
	I0804 00:16:03.751072   64502 config.go:182] Loaded profile config "embed-certs-877598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:16:03.751108   64502 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-877598"
	I0804 00:16:03.751063   64502 addons.go:69] Setting metrics-server=true in profile "embed-certs-877598"
	I0804 00:16:03.751181   64502 addons.go:234] Setting addon metrics-server=true in "embed-certs-877598"
	W0804 00:16:03.751196   64502 addons.go:243] addon metrics-server should already be in state true
	I0804 00:16:03.751245   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751467   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751503   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751540   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.751612   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.751088   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.751990   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.752017   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.752817   64502 out.go:177] * Verifying Kubernetes components...
	I0804 00:16:03.754613   64502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:16:03.769684   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0804 00:16:03.769701   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0804 00:16:03.769697   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
	I0804 00:16:03.770197   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770332   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770619   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.770792   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770808   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.770935   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.770949   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771125   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771327   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.771520   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.771545   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.771555   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.771938   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.772138   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772195   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.772521   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.772565   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.776267   64502 addons.go:234] Setting addon default-storageclass=true in "embed-certs-877598"
	W0804 00:16:03.776292   64502 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:16:03.776327   64502 host.go:66] Checking if "embed-certs-877598" exists ...
	I0804 00:16:03.776695   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.776738   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.789183   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0804 00:16:03.789660   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.789796   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0804 00:16:03.790184   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790202   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790246   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.790608   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.790869   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.790900   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.790985   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.791276   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.791519   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.793005   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.793338   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.795747   64502 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:16:03.795748   64502 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:16:03.796208   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33857
	I0804 00:16:03.796652   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.797194   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.797220   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.797589   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:16:03.797611   64502 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:16:03.797632   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.797640   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.797673   64502 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:03.797684   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:16:03.797697   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.798266   64502 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:16:03.798311   64502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:16:03.801933   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802083   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802417   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802445   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802589   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.802766   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.802851   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.802868   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.802936   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803140   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.803166   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.803310   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.803409   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.803512   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.818073   64502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0804 00:16:03.818647   64502 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:16:03.819107   64502 main.go:141] libmachine: Using API Version  1
	I0804 00:16:03.819130   64502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:16:03.819488   64502 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:16:03.819721   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetState
	I0804 00:16:03.821982   64502 main.go:141] libmachine: (embed-certs-877598) Calling .DriverName
	I0804 00:16:03.822216   64502 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:03.822232   64502 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:16:03.822251   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHHostname
	I0804 00:16:03.825593   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826055   64502 main.go:141] libmachine: (embed-certs-877598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:aa:38", ip: ""} in network mk-embed-certs-877598: {Iface:virbr1 ExpiryTime:2024-08-04 01:15:38 +0000 UTC Type:0 Mac:52:54:00:86:aa:38 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:embed-certs-877598 Clientid:01:52:54:00:86:aa:38}
	I0804 00:16:03.826090   64502 main.go:141] libmachine: (embed-certs-877598) DBG | domain embed-certs-877598 has defined IP address 192.168.50.140 and MAC address 52:54:00:86:aa:38 in network mk-embed-certs-877598
	I0804 00:16:03.826356   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHPort
	I0804 00:16:03.826526   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHKeyPath
	I0804 00:16:03.826667   64502 main.go:141] libmachine: (embed-certs-877598) Calling .GetSSHUsername
	I0804 00:16:03.826829   64502 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/embed-certs-877598/id_rsa Username:docker}
	I0804 00:16:03.955019   64502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:16:03.976453   64502 node_ready.go:35] waiting up to 6m0s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:04.051717   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:16:04.074720   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:16:04.074740   64502 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:16:04.099578   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:16:04.099606   64502 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:16:04.118348   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:16:04.163390   64502 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:04.163418   64502 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:16:04.227379   64502 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:16:05.143364   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091613097s)
	I0804 00:16:05.143418   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143419   64502 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.025041953s)
	I0804 00:16:05.143430   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143439   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143449   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143726   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143743   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143755   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143764   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.143862   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.143893   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.143915   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.143935   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.143964   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.144014   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144033   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.144085   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144259   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.144305   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.144319   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.150739   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.150761   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.151073   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.151102   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.151130   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.169806   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.169832   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170103   64502 main.go:141] libmachine: (embed-certs-877598) DBG | Closing plugin on server side
	I0804 00:16:05.170122   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170148   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170159   64502 main.go:141] libmachine: Making call to close driver server
	I0804 00:16:05.170171   64502 main.go:141] libmachine: (embed-certs-877598) Calling .Close
	I0804 00:16:05.170461   64502 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:16:05.170546   64502 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:16:05.170563   64502 addons.go:475] Verifying addon metrics-server=true in "embed-certs-877598"
	I0804 00:16:05.172575   64502 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0804 00:16:05.173964   64502 addons.go:510] duration metric: took 1.423065893s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0804 00:16:02.307325   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:02.324168   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:02.324233   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:02.370204   64758 cri.go:89] found id: ""
	I0804 00:16:02.370234   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.370250   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:02.370258   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:02.370325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:02.405586   64758 cri.go:89] found id: ""
	I0804 00:16:02.405616   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.405628   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:02.405636   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:02.405694   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:02.445644   64758 cri.go:89] found id: ""
	I0804 00:16:02.445665   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.445675   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:02.445682   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:02.445739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:02.483659   64758 cri.go:89] found id: ""
	I0804 00:16:02.483686   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.483695   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:02.483701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:02.483751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:02.519903   64758 cri.go:89] found id: ""
	I0804 00:16:02.519929   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.519938   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:02.519944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:02.519991   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:02.557373   64758 cri.go:89] found id: ""
	I0804 00:16:02.557401   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.557410   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:02.557416   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:02.557472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:02.594203   64758 cri.go:89] found id: ""
	I0804 00:16:02.594238   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.594249   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:02.594256   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:02.594316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:02.635487   64758 cri.go:89] found id: ""
	I0804 00:16:02.635512   64758 logs.go:276] 0 containers: []
	W0804 00:16:02.635520   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:02.635529   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:02.635543   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:02.686990   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:02.687035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:02.701784   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:02.701810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:02.778626   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:02.778648   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:02.778662   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:02.856056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:02.856097   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:05.402858   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:05.418825   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:05.418900   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:05.458789   64758 cri.go:89] found id: ""
	I0804 00:16:05.458872   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.458887   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:05.458895   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:05.458967   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:05.498258   64758 cri.go:89] found id: ""
	I0804 00:16:05.498284   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.498295   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:05.498302   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:05.498364   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:05.540892   64758 cri.go:89] found id: ""
	I0804 00:16:05.540919   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.540927   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:05.540933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:05.540992   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:05.578876   64758 cri.go:89] found id: ""
	I0804 00:16:05.578911   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.578919   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:05.578924   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:05.578971   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:05.616248   64758 cri.go:89] found id: ""
	I0804 00:16:05.616272   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.616280   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:05.616285   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:05.616339   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:05.654387   64758 cri.go:89] found id: ""
	I0804 00:16:05.654419   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.654428   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:05.654436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:05.654528   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:05.695579   64758 cri.go:89] found id: ""
	I0804 00:16:05.695613   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.695625   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:05.695669   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:05.695752   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:05.740754   64758 cri.go:89] found id: ""
	I0804 00:16:05.740777   64758 logs.go:276] 0 containers: []
	W0804 00:16:05.740785   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:05.740793   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:05.740805   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:05.792091   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:05.792126   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:05.809130   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:05.809164   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:05.888441   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:05.888465   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:05.888479   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:05.969336   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:05.969390   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:03.111834   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.613749   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:03.830570   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:06.328076   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:05.980692   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:08.480205   64502 node_ready.go:53] node "embed-certs-877598" has status "Ready":"False"
	I0804 00:16:09.480127   64502 node_ready.go:49] node "embed-certs-877598" has status "Ready":"True"
	I0804 00:16:09.480147   64502 node_ready.go:38] duration metric: took 5.503660587s for node "embed-certs-877598" to be "Ready" ...
	I0804 00:16:09.480155   64502 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:16:09.485704   64502 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491316   64502 pod_ready.go:92] pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:09.491340   64502 pod_ready.go:81] duration metric: took 5.611918ms for pod "coredns-7db6d8ff4d-7gbcf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:09.491348   64502 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:08.514981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:08.531117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:08.531188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:08.569167   64758 cri.go:89] found id: ""
	I0804 00:16:08.569199   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.569210   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:08.569218   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:08.569282   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:08.608478   64758 cri.go:89] found id: ""
	I0804 00:16:08.608559   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.608572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:08.608580   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:08.608636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:08.645939   64758 cri.go:89] found id: ""
	I0804 00:16:08.645972   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.645983   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:08.645990   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:08.646050   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:08.685274   64758 cri.go:89] found id: ""
	I0804 00:16:08.685305   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.685316   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:08.685324   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:08.685400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:08.722314   64758 cri.go:89] found id: ""
	I0804 00:16:08.722345   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.722357   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:08.722363   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:08.722427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:08.758577   64758 cri.go:89] found id: ""
	I0804 00:16:08.758606   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.758617   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:08.758624   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:08.758685   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.798734   64758 cri.go:89] found id: ""
	I0804 00:16:08.798761   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.798773   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:08.798781   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:08.798842   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:08.837577   64758 cri.go:89] found id: ""
	I0804 00:16:08.837600   64758 logs.go:276] 0 containers: []
	W0804 00:16:08.837608   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:08.837616   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:08.837627   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:08.894426   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:08.894465   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:08.909851   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:08.909879   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:08.989858   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:08.989878   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:08.989893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:09.081056   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:09.081098   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:11.627914   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:11.641805   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:11.641896   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:11.679002   64758 cri.go:89] found id: ""
	I0804 00:16:11.679028   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.679036   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:11.679042   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:11.679090   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:11.720188   64758 cri.go:89] found id: ""
	I0804 00:16:11.720220   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.720236   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:11.720245   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:11.720307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:11.760085   64758 cri.go:89] found id: ""
	I0804 00:16:11.760118   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.760130   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:11.760138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:11.760198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:11.796220   64758 cri.go:89] found id: ""
	I0804 00:16:11.796249   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.796266   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:11.796274   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:11.796335   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:11.834216   64758 cri.go:89] found id: ""
	I0804 00:16:11.834243   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.834253   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:11.834260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:11.834336   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:11.869205   64758 cri.go:89] found id: ""
	I0804 00:16:11.869230   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.869237   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:11.869243   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:11.869301   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:08.110499   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.618011   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:08.827284   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:10.828942   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:11.498264   64502 pod_ready.go:102] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:12.498916   64502 pod_ready.go:92] pod "etcd-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:12.498949   64502 pod_ready.go:81] duration metric: took 3.007593153s for pod "etcd-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:12.498961   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562862   64502 pod_ready.go:92] pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.562896   64502 pod_ready.go:81] duration metric: took 2.063926324s for pod "kube-apiserver-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.562910   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573628   64502 pod_ready.go:92] pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.573655   64502 pod_ready.go:81] duration metric: took 10.735916ms for pod "kube-controller-manager-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.573670   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583241   64502 pod_ready.go:92] pod "kube-proxy-wk8zf" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.583266   64502 pod_ready.go:81] duration metric: took 9.588875ms for pod "kube-proxy-wk8zf" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.583278   64502 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593419   64502 pod_ready.go:92] pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace has status "Ready":"True"
	I0804 00:16:14.593445   64502 pod_ready.go:81] duration metric: took 10.158665ms for pod "kube-scheduler-embed-certs-877598" in "kube-system" namespace to be "Ready" ...
	I0804 00:16:14.593457   64502 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
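(The pod_ready lines above poll each pod's Ready condition through the Kubernetes API. A minimal sketch of an equivalent manual check with kubectl; the pod name comes from the log, and it assumes the minikube profile name embed-certs-877598 is also the kubeconfig context, as minikube configures by default.)

    # print the Ready condition of the pod the test is waiting on
    kubectl --context embed-certs-877598 -n kube-system get pod metrics-server-569cc877fc-hbcm9 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until it becomes Ready, mirroring the test's 6m0s budget
    kubectl --context embed-certs-877598 -n kube-system wait --for=condition=Ready \
      pod/metrics-server-569cc877fc-hbcm9 --timeout=6m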
	I0804 00:16:11.912091   64758 cri.go:89] found id: ""
	I0804 00:16:11.912120   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.912132   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:11.912145   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:11.912203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:11.949570   64758 cri.go:89] found id: ""
	I0804 00:16:11.949603   64758 logs.go:276] 0 containers: []
	W0804 00:16:11.949614   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:11.949625   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:11.949643   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:12.006542   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:12.006575   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:12.022435   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:12.022474   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:12.101007   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:12.101032   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:12.101057   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:12.183836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:12.183876   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:14.725345   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:14.738389   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:14.738464   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:14.780103   64758 cri.go:89] found id: ""
	I0804 00:16:14.780133   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.780142   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:14.780147   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:14.780197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:14.817811   64758 cri.go:89] found id: ""
	I0804 00:16:14.817847   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.817863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:14.817872   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:14.817946   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:14.854450   64758 cri.go:89] found id: ""
	I0804 00:16:14.854478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.854488   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:14.854495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:14.854561   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:14.891862   64758 cri.go:89] found id: ""
	I0804 00:16:14.891891   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.891900   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:14.891905   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:14.891958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:14.928450   64758 cri.go:89] found id: ""
	I0804 00:16:14.928478   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.928488   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:14.928495   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:14.928554   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:14.965820   64758 cri.go:89] found id: ""
	I0804 00:16:14.965848   64758 logs.go:276] 0 containers: []
	W0804 00:16:14.965860   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:14.965867   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:14.965945   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:15.008725   64758 cri.go:89] found id: ""
	I0804 00:16:15.008874   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.008888   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:15.008897   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:15.008957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:15.044618   64758 cri.go:89] found id: ""
	I0804 00:16:15.044768   64758 logs.go:276] 0 containers: []
	W0804 00:16:15.044792   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:15.044802   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:15.044815   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:15.102786   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:15.102825   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:15.118305   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:15.118347   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:15.196397   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:15.196420   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:15.196435   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:15.277941   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:15.277986   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:13.110969   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.112546   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:13.327840   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:15.826447   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:16.600315   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.099064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.819354   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:17.834271   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:17.834332   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:17.870930   64758 cri.go:89] found id: ""
	I0804 00:16:17.870961   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.870973   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:17.870980   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:17.871040   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:17.907980   64758 cri.go:89] found id: ""
	I0804 00:16:17.908007   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.908016   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:17.908021   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:17.908067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:17.943257   64758 cri.go:89] found id: ""
	I0804 00:16:17.943284   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.943295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:17.943301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:17.943363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:17.982297   64758 cri.go:89] found id: ""
	I0804 00:16:17.982328   64758 logs.go:276] 0 containers: []
	W0804 00:16:17.982338   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:17.982345   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:17.982405   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:18.022780   64758 cri.go:89] found id: ""
	I0804 00:16:18.022810   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.022841   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:18.022850   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:18.022913   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:18.061891   64758 cri.go:89] found id: ""
	I0804 00:16:18.061926   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.061937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:18.061945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:18.062012   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:18.100807   64758 cri.go:89] found id: ""
	I0804 00:16:18.100845   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.100855   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:18.100862   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:18.100917   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:18.142011   64758 cri.go:89] found id: ""
	I0804 00:16:18.142044   64758 logs.go:276] 0 containers: []
	W0804 00:16:18.142056   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:18.142066   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:18.142090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:18.195476   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:18.195511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:18.209661   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:18.209690   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:18.282638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:18.282657   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:18.282669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:18.363900   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:18.363938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:20.908753   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:20.922878   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:20.922962   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:20.961013   64758 cri.go:89] found id: ""
	I0804 00:16:20.961041   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.961052   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:20.961058   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:20.961109   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:20.998027   64758 cri.go:89] found id: ""
	I0804 00:16:20.998059   64758 logs.go:276] 0 containers: []
	W0804 00:16:20.998068   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:20.998074   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:20.998121   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:21.035640   64758 cri.go:89] found id: ""
	I0804 00:16:21.035669   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.035680   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:21.035688   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:21.035751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:21.075737   64758 cri.go:89] found id: ""
	I0804 00:16:21.075770   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.075779   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:21.075786   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:21.075846   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:21.120024   64758 cri.go:89] found id: ""
	I0804 00:16:21.120046   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.120054   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:21.120061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:21.120126   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:21.160796   64758 cri.go:89] found id: ""
	I0804 00:16:21.160821   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.160840   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:21.160847   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:21.160907   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:21.195519   64758 cri.go:89] found id: ""
	I0804 00:16:21.195547   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.195558   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:21.195566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:21.195629   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:21.236193   64758 cri.go:89] found id: ""
	I0804 00:16:21.236222   64758 logs.go:276] 0 containers: []
	W0804 00:16:21.236232   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:21.236243   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:21.236258   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:21.295154   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:21.295198   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:21.309540   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:21.309566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:21.389391   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:21.389416   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:21.389433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:21.472771   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:21.472808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:17.611366   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.612092   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:17.827036   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:19.827655   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.828026   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:21.101899   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:23.601687   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.018923   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:24.032954   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:24.033013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:24.073677   64758 cri.go:89] found id: ""
	I0804 00:16:24.073703   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.073711   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:24.073716   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:24.073777   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:24.115752   64758 cri.go:89] found id: ""
	I0804 00:16:24.115775   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.115785   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:24.115792   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:24.115849   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:24.152967   64758 cri.go:89] found id: ""
	I0804 00:16:24.153001   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.153017   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:24.153024   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:24.153098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:24.190557   64758 cri.go:89] found id: ""
	I0804 00:16:24.190581   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.190589   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:24.190595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:24.190643   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:24.229312   64758 cri.go:89] found id: ""
	I0804 00:16:24.229341   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.229351   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:24.229373   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:24.229437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:24.265076   64758 cri.go:89] found id: ""
	I0804 00:16:24.265100   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.265107   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:24.265113   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:24.265167   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:24.306508   64758 cri.go:89] found id: ""
	I0804 00:16:24.306534   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.306542   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:24.306547   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:24.306598   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:24.350714   64758 cri.go:89] found id: ""
	I0804 00:16:24.350747   64758 logs.go:276] 0 containers: []
	W0804 00:16:24.350759   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:24.350770   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:24.350785   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:24.366188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:24.366216   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:24.438410   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:24.438431   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:24.438447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:24.522635   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:24.522669   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:24.562647   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:24.562678   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
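(Every "describe nodes" attempt in this window fails with "connection to the server localhost:8443 was refused", i.e. nothing is answering on the apiserver port the kubeconfig points at. A minimal sketch of how that could be confirmed from inside the node; the pgrep pattern is the one the test itself runs, while the ss check is an added assumption that ss is available on the node image.)

    # is a kube-apiserver process running at all? (same pattern as the test)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # is anything listening on the apiserver port from the kubeconfig?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"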
	I0804 00:16:22.110420   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.111399   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.613839   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:24.327982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.826914   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:26.099435   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:28.099896   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:30.100659   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:27.119437   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:27.133330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:27.133426   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:27.170001   64758 cri.go:89] found id: ""
	I0804 00:16:27.170039   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.170048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:27.170054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:27.170112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:27.205811   64758 cri.go:89] found id: ""
	I0804 00:16:27.205843   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.205854   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:27.205861   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:27.205922   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:27.247249   64758 cri.go:89] found id: ""
	I0804 00:16:27.247278   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.247287   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:27.247294   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:27.247360   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:27.285659   64758 cri.go:89] found id: ""
	I0804 00:16:27.285688   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.285697   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:27.285703   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:27.285774   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:27.321039   64758 cri.go:89] found id: ""
	I0804 00:16:27.321066   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.321075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:27.321084   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:27.321130   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:27.359947   64758 cri.go:89] found id: ""
	I0804 00:16:27.359977   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.359988   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:27.359996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:27.360056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:27.401408   64758 cri.go:89] found id: ""
	I0804 00:16:27.401432   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.401440   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:27.401449   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:27.401495   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:27.437297   64758 cri.go:89] found id: ""
	I0804 00:16:27.437326   64758 logs.go:276] 0 containers: []
	W0804 00:16:27.437337   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:27.437347   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:27.437373   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:27.490594   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:27.490639   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:27.505993   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:27.506021   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:27.588779   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:27.588804   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:27.588820   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:27.681557   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:27.681592   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.225062   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:30.239475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:30.239540   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:30.283896   64758 cri.go:89] found id: ""
	I0804 00:16:30.283923   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.283931   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:30.283938   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:30.284013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:30.321506   64758 cri.go:89] found id: ""
	I0804 00:16:30.321532   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.321539   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:30.321545   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:30.321593   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:30.358314   64758 cri.go:89] found id: ""
	I0804 00:16:30.358340   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.358347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:30.358353   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:30.358400   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:30.393561   64758 cri.go:89] found id: ""
	I0804 00:16:30.393587   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.393595   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:30.393600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:30.393646   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:30.429907   64758 cri.go:89] found id: ""
	I0804 00:16:30.429935   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.429943   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:30.429949   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:30.430008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:30.466305   64758 cri.go:89] found id: ""
	I0804 00:16:30.466332   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.466342   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:30.466350   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:30.466408   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:30.505384   64758 cri.go:89] found id: ""
	I0804 00:16:30.505413   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.505424   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:30.505431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:30.505492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:30.541756   64758 cri.go:89] found id: ""
	I0804 00:16:30.541786   64758 logs.go:276] 0 containers: []
	W0804 00:16:30.541796   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:30.541806   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:30.541821   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:30.555516   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:30.555554   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:30.627442   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:30.627463   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:30.627473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:30.701452   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:30.701489   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:30.743436   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:30.743473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:29.111149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.111470   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:29.327268   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:31.328424   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:32.605884   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:34.608119   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.298898   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:33.315211   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:33.315292   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:33.353171   64758 cri.go:89] found id: ""
	I0804 00:16:33.353207   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.353220   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:33.353229   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:33.353297   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:33.389767   64758 cri.go:89] found id: ""
	I0804 00:16:33.389792   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.389799   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:33.389805   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:33.389851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:33.446889   64758 cri.go:89] found id: ""
	I0804 00:16:33.446928   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.446939   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:33.446946   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:33.447004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:33.487340   64758 cri.go:89] found id: ""
	I0804 00:16:33.487362   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.487370   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:33.487376   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:33.487423   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:33.530398   64758 cri.go:89] found id: ""
	I0804 00:16:33.530421   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.530429   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:33.530435   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:33.530483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:33.568725   64758 cri.go:89] found id: ""
	I0804 00:16:33.568753   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.568762   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:33.568769   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:33.568818   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:33.607205   64758 cri.go:89] found id: ""
	I0804 00:16:33.607232   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.607242   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:33.607249   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:33.607311   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:33.648188   64758 cri.go:89] found id: ""
	I0804 00:16:33.648220   64758 logs.go:276] 0 containers: []
	W0804 00:16:33.648230   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:33.648240   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:33.648256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:33.700231   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:33.700266   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:33.714899   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:33.714932   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:33.794306   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:33.794326   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:33.794340   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.872446   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:33.872482   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.415000   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:36.428920   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:36.428996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:36.464784   64758 cri.go:89] found id: ""
	I0804 00:16:36.464810   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.464817   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:36.464823   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:36.464925   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:36.501394   64758 cri.go:89] found id: ""
	I0804 00:16:36.501423   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.501431   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:36.501437   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:36.501497   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:36.537049   64758 cri.go:89] found id: ""
	I0804 00:16:36.537079   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.537090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:36.537102   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:36.537173   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:36.573956   64758 cri.go:89] found id: ""
	I0804 00:16:36.573986   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.573997   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:36.574004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:36.574065   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:36.612996   64758 cri.go:89] found id: ""
	I0804 00:16:36.613016   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.613023   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:36.613029   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:36.613083   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:36.652346   64758 cri.go:89] found id: ""
	I0804 00:16:36.652367   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.652374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:36.652380   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:36.652437   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:36.690073   64758 cri.go:89] found id: ""
	I0804 00:16:36.690100   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.690110   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:36.690119   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:36.690182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:36.732436   64758 cri.go:89] found id: ""
	I0804 00:16:36.732466   64758 logs.go:276] 0 containers: []
	W0804 00:16:36.732477   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:36.732487   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:36.732505   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:36.746036   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:36.746060   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:36.818141   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:36.818164   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:36.818179   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:33.611181   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.611691   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:33.329719   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:35.330172   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.100705   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.603600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:36.907689   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:36.907732   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:36.947104   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:36.947135   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.502960   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:39.516340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:39.516414   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:39.555903   64758 cri.go:89] found id: ""
	I0804 00:16:39.555929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.555939   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:39.555946   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:39.556004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:39.599791   64758 cri.go:89] found id: ""
	I0804 00:16:39.599816   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.599827   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:39.599834   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:39.599894   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:39.642903   64758 cri.go:89] found id: ""
	I0804 00:16:39.642929   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.642936   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:39.642944   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:39.643004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:39.678667   64758 cri.go:89] found id: ""
	I0804 00:16:39.678693   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.678702   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:39.678709   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:39.678757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:39.716888   64758 cri.go:89] found id: ""
	I0804 00:16:39.716916   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.716926   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:39.716933   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:39.717001   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:39.751576   64758 cri.go:89] found id: ""
	I0804 00:16:39.751602   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.751610   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:39.751616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:39.751664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:39.794026   64758 cri.go:89] found id: ""
	I0804 00:16:39.794056   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.794067   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:39.794087   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:39.794158   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:39.841426   64758 cri.go:89] found id: ""
	I0804 00:16:39.841454   64758 logs.go:276] 0 containers: []
	W0804 00:16:39.841464   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:39.841474   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:39.841492   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:39.902579   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:39.902616   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:39.924467   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:39.924495   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:40.001318   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:40.001345   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:40.001377   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:40.081520   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:40.081552   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:38.111443   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:40.610810   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:37.827851   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:39.828752   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.327716   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.100037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.100850   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:42.623094   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:42.636523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:42.636594   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:42.674188   64758 cri.go:89] found id: ""
	I0804 00:16:42.674218   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.674226   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:42.674231   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:42.674277   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:42.708496   64758 cri.go:89] found id: ""
	I0804 00:16:42.708522   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.708532   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:42.708539   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:42.708601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:42.751050   64758 cri.go:89] found id: ""
	I0804 00:16:42.751087   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.751100   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:42.751107   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:42.751170   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:42.788520   64758 cri.go:89] found id: ""
	I0804 00:16:42.788546   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.788555   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:42.788560   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:42.788619   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:42.828273   64758 cri.go:89] found id: ""
	I0804 00:16:42.828297   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.828304   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:42.828309   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:42.828356   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:42.867754   64758 cri.go:89] found id: ""
	I0804 00:16:42.867784   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.867799   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:42.867807   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:42.867864   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:42.903945   64758 cri.go:89] found id: ""
	I0804 00:16:42.903977   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.903988   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:42.903996   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:42.904059   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:42.942477   64758 cri.go:89] found id: ""
	I0804 00:16:42.942518   64758 logs.go:276] 0 containers: []
	W0804 00:16:42.942539   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:42.942549   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:42.942565   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:42.981776   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:42.981810   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:43.037601   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:43.037634   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:43.052719   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:43.052746   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:43.122664   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:43.122688   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:43.122702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:45.701275   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:45.714532   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:45.714607   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:45.750932   64758 cri.go:89] found id: ""
	I0804 00:16:45.750955   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.750986   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:45.750991   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:45.751042   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:45.787348   64758 cri.go:89] found id: ""
	I0804 00:16:45.787373   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.787381   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:45.787387   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:45.787441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:45.823390   64758 cri.go:89] found id: ""
	I0804 00:16:45.823419   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.823429   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:45.823436   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:45.823498   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:45.861400   64758 cri.go:89] found id: ""
	I0804 00:16:45.861430   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.861440   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:45.861448   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:45.861508   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:45.898992   64758 cri.go:89] found id: ""
	I0804 00:16:45.899024   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.899036   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:45.899043   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:45.899110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:45.934542   64758 cri.go:89] found id: ""
	I0804 00:16:45.934570   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.934582   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:45.934589   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:45.934648   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:45.967908   64758 cri.go:89] found id: ""
	I0804 00:16:45.967938   64758 logs.go:276] 0 containers: []
	W0804 00:16:45.967949   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:45.967957   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:45.968018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:46.006475   64758 cri.go:89] found id: ""
	I0804 00:16:46.006504   64758 logs.go:276] 0 containers: []
	W0804 00:16:46.006516   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:46.006526   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:46.006541   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:46.058760   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:46.058793   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:46.074753   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:46.074777   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:46.149634   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:46.149655   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:46.149671   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:46.230104   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:46.230140   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:43.111492   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:45.611224   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:44.827683   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:47.326999   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:46.600307   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.100532   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:48.772224   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:48.785848   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:48.785935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.825206   64758 cri.go:89] found id: ""
	I0804 00:16:48.825232   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.825242   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:48.825249   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:48.825315   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:48.861559   64758 cri.go:89] found id: ""
	I0804 00:16:48.861588   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.861599   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:48.861607   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:48.861675   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:48.903375   64758 cri.go:89] found id: ""
	I0804 00:16:48.903401   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.903412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:48.903419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:48.903480   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:48.940708   64758 cri.go:89] found id: ""
	I0804 00:16:48.940736   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.940748   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:48.940755   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:48.940817   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:48.976190   64758 cri.go:89] found id: ""
	I0804 00:16:48.976218   64758 logs.go:276] 0 containers: []
	W0804 00:16:48.976228   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:48.976236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:48.976291   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:49.010393   64758 cri.go:89] found id: ""
	I0804 00:16:49.010423   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.010434   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:49.010442   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:49.010506   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:49.046670   64758 cri.go:89] found id: ""
	I0804 00:16:49.046698   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.046707   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:49.046711   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:49.046759   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:49.085254   64758 cri.go:89] found id: ""
	I0804 00:16:49.085284   64758 logs.go:276] 0 containers: []
	W0804 00:16:49.085293   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:49.085302   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:49.085314   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:49.142402   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:49.142433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:49.157063   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:49.157092   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:49.233808   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:49.233829   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:49.233841   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:49.320355   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:49.320395   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:51.862548   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:51.875679   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:51.875750   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:48.110954   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:50.111867   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:49.327109   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.327920   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.600258   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.601052   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:51.911400   64758 cri.go:89] found id: ""
	I0804 00:16:51.911427   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.911437   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:51.911444   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:51.911505   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:51.948825   64758 cri.go:89] found id: ""
	I0804 00:16:51.948853   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.948863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:51.948870   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:51.948935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:51.989458   64758 cri.go:89] found id: ""
	I0804 00:16:51.989488   64758 logs.go:276] 0 containers: []
	W0804 00:16:51.989499   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:51.989506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:51.989568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:52.026663   64758 cri.go:89] found id: ""
	I0804 00:16:52.026685   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.026693   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:52.026698   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:52.026754   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:52.066089   64758 cri.go:89] found id: ""
	I0804 00:16:52.066115   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.066127   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:52.066135   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:52.066198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:52.102159   64758 cri.go:89] found id: ""
	I0804 00:16:52.102185   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.102196   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:52.102203   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:52.102258   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:52.144239   64758 cri.go:89] found id: ""
	I0804 00:16:52.144266   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.144276   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:52.144283   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:52.144344   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:52.180679   64758 cri.go:89] found id: ""
	I0804 00:16:52.180708   64758 logs.go:276] 0 containers: []
	W0804 00:16:52.180717   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:52.180725   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:52.180738   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:52.262074   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:52.262116   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.305913   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:52.305948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:52.357044   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:52.357081   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:52.372090   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:52.372119   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:52.444148   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:54.944910   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:54.958182   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:54.958239   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:54.993629   64758 cri.go:89] found id: ""
	I0804 00:16:54.993657   64758 logs.go:276] 0 containers: []
	W0804 00:16:54.993668   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:54.993675   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:54.993734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:55.029270   64758 cri.go:89] found id: ""
	I0804 00:16:55.029299   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.029310   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:55.029317   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:55.029393   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:55.067923   64758 cri.go:89] found id: ""
	I0804 00:16:55.067951   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.067961   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:55.067968   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:55.068027   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:55.107533   64758 cri.go:89] found id: ""
	I0804 00:16:55.107556   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.107565   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:55.107572   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:55.107633   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:55.143828   64758 cri.go:89] found id: ""
	I0804 00:16:55.143856   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.143868   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:55.143875   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:55.143940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:55.177960   64758 cri.go:89] found id: ""
	I0804 00:16:55.178015   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.178030   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:55.178038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:55.178112   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:55.217457   64758 cri.go:89] found id: ""
	I0804 00:16:55.217481   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.217488   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:55.217494   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:55.217538   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:55.259862   64758 cri.go:89] found id: ""
	I0804 00:16:55.259890   64758 logs.go:276] 0 containers: []
	W0804 00:16:55.259898   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:55.259907   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:55.259918   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:16:55.311566   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:55.311598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:55.327833   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:55.327866   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:55.406475   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:55.406495   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:55.406511   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:55.484586   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:55.484618   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:52.610982   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:54.611276   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.611515   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:53.827394   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:55.827945   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:56.099238   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.100223   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.599870   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.028251   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:16:58.042169   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:16:58.042236   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:16:58.076836   64758 cri.go:89] found id: ""
	I0804 00:16:58.076859   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.076868   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:16:58.076873   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:16:58.076937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:16:58.115989   64758 cri.go:89] found id: ""
	I0804 00:16:58.116019   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.116031   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:16:58.116037   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:16:58.116099   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:16:58.155049   64758 cri.go:89] found id: ""
	I0804 00:16:58.155079   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.155090   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:16:58.155097   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:16:58.155160   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:16:58.190257   64758 cri.go:89] found id: ""
	I0804 00:16:58.190293   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.190305   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:16:58.190315   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:16:58.190370   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:16:58.225001   64758 cri.go:89] found id: ""
	I0804 00:16:58.225029   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.225038   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:16:58.225061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:16:58.225118   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:16:58.268881   64758 cri.go:89] found id: ""
	I0804 00:16:58.268925   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.268937   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:16:58.268945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:16:58.269010   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:16:58.305223   64758 cri.go:89] found id: ""
	I0804 00:16:58.305253   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.305269   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:16:58.305277   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:16:58.305340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:16:58.340517   64758 cri.go:89] found id: ""
	I0804 00:16:58.340548   64758 logs.go:276] 0 containers: []
	W0804 00:16:58.340559   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:16:58.340570   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:16:58.340584   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:16:58.355372   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:16:58.355403   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:16:58.426292   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:16:58.426312   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:16:58.426326   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:16:58.509990   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:16:58.510034   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.550957   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:16:58.550988   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.104806   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:01.119379   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:01.119453   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:01.158376   64758 cri.go:89] found id: ""
	I0804 00:17:01.158407   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.158419   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:01.158426   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:01.158484   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:01.193826   64758 cri.go:89] found id: ""
	I0804 00:17:01.193858   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.193869   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:01.193876   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:01.193937   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:01.228566   64758 cri.go:89] found id: ""
	I0804 00:17:01.228588   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.228600   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:01.228607   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:01.228667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:01.265736   64758 cri.go:89] found id: ""
	I0804 00:17:01.265762   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.265772   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:01.265778   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:01.265834   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:01.302655   64758 cri.go:89] found id: ""
	I0804 00:17:01.302679   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.302694   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:01.302699   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:01.302753   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:01.340191   64758 cri.go:89] found id: ""
	I0804 00:17:01.340218   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.340226   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:01.340236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:01.340294   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:01.375767   64758 cri.go:89] found id: ""
	I0804 00:17:01.375789   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.375797   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:01.375802   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:01.375875   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:01.412446   64758 cri.go:89] found id: ""
	I0804 00:17:01.412479   64758 logs.go:276] 0 containers: []
	W0804 00:17:01.412490   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:01.412502   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:01.412518   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:01.466271   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:01.466309   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:01.480800   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:01.480838   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:01.547909   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:01.547932   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:01.547948   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:01.628318   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:01.628351   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:16:58.611854   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:01.111626   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:16:58.326831   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:00.327154   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.328038   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:02.601960   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.099489   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.175883   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:04.189038   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:04.189098   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:04.229126   64758 cri.go:89] found id: ""
	I0804 00:17:04.229158   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.229167   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:04.229174   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:04.229235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:04.264107   64758 cri.go:89] found id: ""
	I0804 00:17:04.264134   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.264142   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:04.264147   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:04.264203   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:04.299959   64758 cri.go:89] found id: ""
	I0804 00:17:04.299996   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.300004   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:04.300010   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:04.300056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:04.337978   64758 cri.go:89] found id: ""
	I0804 00:17:04.338006   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.338016   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:04.338023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:04.338081   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:04.377969   64758 cri.go:89] found id: ""
	I0804 00:17:04.377993   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.378001   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:04.378006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:04.378068   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:04.413036   64758 cri.go:89] found id: ""
	I0804 00:17:04.413062   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.413071   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:04.413078   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:04.413140   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:04.450387   64758 cri.go:89] found id: ""
	I0804 00:17:04.450417   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.450426   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:04.450431   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:04.450488   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:04.490132   64758 cri.go:89] found id: ""
	I0804 00:17:04.490165   64758 logs.go:276] 0 containers: []
	W0804 00:17:04.490177   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:04.490188   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:04.490204   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:04.560633   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:04.560653   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:04.560668   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:04.639409   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:04.639445   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:04.682479   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:04.682512   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:04.734823   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:04.734857   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:03.112357   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:05.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:04.828050   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.327249   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.099893   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.100093   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:07.250174   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:07.263523   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:07.263599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:07.300095   64758 cri.go:89] found id: ""
	I0804 00:17:07.300124   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.300136   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:07.300144   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:07.300211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:07.337798   64758 cri.go:89] found id: ""
	I0804 00:17:07.337824   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.337846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:07.337851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:07.337902   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:07.375305   64758 cri.go:89] found id: ""
	I0804 00:17:07.375337   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.375348   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:07.375356   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:07.375406   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:07.411603   64758 cri.go:89] found id: ""
	I0804 00:17:07.411629   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.411639   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:07.411646   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:07.411704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:07.450478   64758 cri.go:89] found id: ""
	I0804 00:17:07.450502   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.450511   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:07.450518   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:07.450564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:07.489972   64758 cri.go:89] found id: ""
	I0804 00:17:07.489997   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.490006   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:07.490012   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:07.490073   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:07.523685   64758 cri.go:89] found id: ""
	I0804 00:17:07.523713   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.523725   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:07.523732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:07.523789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:07.562636   64758 cri.go:89] found id: ""
	I0804 00:17:07.562665   64758 logs.go:276] 0 containers: []
	W0804 00:17:07.562675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:07.562686   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:07.562702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:07.647968   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:07.648004   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.689829   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:07.689856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:07.738333   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:07.738366   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:07.753419   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:07.753448   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:07.829678   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
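The failure above is the symptom that repeats for the rest of this run: pgrep finds no kube-apiserver process and crictl finds no kube-apiserver container, so the bundled v1.20.0 kubectl has nothing listening on localhost:8443 and every "describe nodes" attempt is refused. A minimal sketch of confirming the same state by hand on the guest, reusing the exact commands the log runs (this assumes SSH access to the node and that crictl is installed there, as the log output implies):

	# Look for an apiserver process or container; both come back empty in this run.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver

	# With no apiserver running, this is expected to fail with "connection refused" on localhost:8443.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig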
	I0804 00:17:10.329981   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:10.343676   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:10.343743   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:10.379546   64758 cri.go:89] found id: ""
	I0804 00:17:10.379575   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.379586   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:10.379594   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:10.379657   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:10.416247   64758 cri.go:89] found id: ""
	I0804 00:17:10.416271   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.416279   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:10.416284   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:10.416340   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:10.455261   64758 cri.go:89] found id: ""
	I0804 00:17:10.455291   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.455303   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:10.455310   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:10.455373   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:10.493220   64758 cri.go:89] found id: ""
	I0804 00:17:10.493251   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.493262   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:10.493270   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:10.493329   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:10.538682   64758 cri.go:89] found id: ""
	I0804 00:17:10.538709   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.538720   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:10.538727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:10.538787   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:10.575509   64758 cri.go:89] found id: ""
	I0804 00:17:10.575535   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.575546   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:10.575553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:10.575609   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:10.613163   64758 cri.go:89] found id: ""
	I0804 00:17:10.613188   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.613196   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:10.613201   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:10.613260   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:10.648914   64758 cri.go:89] found id: ""
	I0804 00:17:10.648940   64758 logs.go:276] 0 containers: []
	W0804 00:17:10.648947   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:10.648956   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:10.648968   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:10.700151   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:10.700187   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:10.714971   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:10.714998   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:10.787679   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:10.787698   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:10.787710   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:10.865008   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:10.865048   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:07.611770   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:10.110299   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:09.327569   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.327855   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:11.603427   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.100524   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
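Interleaved with those checks, three other start-stop profiles (the processes logged as 64502, 65087 and 65441) keep polling their metrics-server pods, which never report Ready in the window shown here. A hedged equivalent of one such poll done by hand with kubectl (the pod name is copied from the log; selecting the right kubeconfig/profile is assumed):

	# Prints "True" once the pod's Ready condition is met; in this run it stays "False".
	kubectl --namespace kube-system get pod metrics-server-6867b74b74-5xfgz \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'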
	I0804 00:17:13.406150   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:13.419602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:13.419659   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:13.456823   64758 cri.go:89] found id: ""
	I0804 00:17:13.456852   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.456863   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:13.456870   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:13.456935   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:13.493527   64758 cri.go:89] found id: ""
	I0804 00:17:13.493556   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.493567   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:13.493574   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:13.493697   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:13.529745   64758 cri.go:89] found id: ""
	I0804 00:17:13.529770   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.529784   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:13.529790   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:13.529856   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:13.567775   64758 cri.go:89] found id: ""
	I0804 00:17:13.567811   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.567819   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:13.567824   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:13.567888   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:13.604638   64758 cri.go:89] found id: ""
	I0804 00:17:13.604670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.604678   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:13.604685   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:13.604741   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:13.646638   64758 cri.go:89] found id: ""
	I0804 00:17:13.646670   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.646679   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:13.646684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:13.646730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:13.694656   64758 cri.go:89] found id: ""
	I0804 00:17:13.694682   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.694693   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:13.694701   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:13.694761   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:13.733738   64758 cri.go:89] found id: ""
	I0804 00:17:13.733762   64758 logs.go:276] 0 containers: []
	W0804 00:17:13.733771   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:13.733780   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:13.733792   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:13.749747   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:13.749775   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:13.832826   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:13.832852   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:13.832868   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:13.914198   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:13.914233   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:13.952753   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:13.952787   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.503600   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:16.516932   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:16.517004   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:16.552012   64758 cri.go:89] found id: ""
	I0804 00:17:16.552037   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.552046   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:16.552052   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:16.552110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:16.590626   64758 cri.go:89] found id: ""
	I0804 00:17:16.590653   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.590660   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:16.590666   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:16.590732   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:16.628684   64758 cri.go:89] found id: ""
	I0804 00:17:16.628712   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.628723   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:16.628729   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:16.628792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:16.664934   64758 cri.go:89] found id: ""
	I0804 00:17:16.664969   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.664980   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:16.664987   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:16.665054   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:16.700098   64758 cri.go:89] found id: ""
	I0804 00:17:16.700127   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.700138   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:16.700144   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:16.700214   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:16.736761   64758 cri.go:89] found id: ""
	I0804 00:17:16.736786   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.736795   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:16.736800   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:16.736863   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:16.780010   64758 cri.go:89] found id: ""
	I0804 00:17:16.780033   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.780045   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:16.780050   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:16.780106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:16.816079   64758 cri.go:89] found id: ""
	I0804 00:17:16.816103   64758 logs.go:276] 0 containers: []
	W0804 00:17:16.816112   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:16.816122   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:16.816136   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:16.866526   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:16.866560   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:16.881254   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:16.881287   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:17:12.610907   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:14.610978   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.611860   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:13.827860   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.327167   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:16.601482   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:19.100152   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:17:16.952491   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:16.952515   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:16.952530   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:17.038943   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:17.038977   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:19.580078   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:19.595538   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:19.595601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:19.632206   64758 cri.go:89] found id: ""
	I0804 00:17:19.632234   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.632245   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:19.632252   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:19.632307   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:19.670335   64758 cri.go:89] found id: ""
	I0804 00:17:19.670362   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.670377   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:19.670388   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:19.670447   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:19.707772   64758 cri.go:89] found id: ""
	I0804 00:17:19.707801   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.707812   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:19.707818   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:19.707877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:19.743822   64758 cri.go:89] found id: ""
	I0804 00:17:19.743855   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.743867   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:19.743874   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:19.743930   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:19.781592   64758 cri.go:89] found id: ""
	I0804 00:17:19.781622   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.781632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:19.781640   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:19.781698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:19.818792   64758 cri.go:89] found id: ""
	I0804 00:17:19.818815   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.818823   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:19.818829   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:19.818877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:19.856486   64758 cri.go:89] found id: ""
	I0804 00:17:19.856511   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.856522   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:19.856528   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:19.856586   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:19.901721   64758 cri.go:89] found id: ""
	I0804 00:17:19.901743   64758 logs.go:276] 0 containers: []
	W0804 00:17:19.901754   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:19.901764   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:19.901780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:19.980095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:19.980119   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:19.980134   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:20.072699   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:20.072750   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:20.159007   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:20.159038   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:20.211785   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:20.211818   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
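Every failed poll ends with the same gathering pass: kubelet and CRI-O logs via journalctl, kernel warnings via dmesg, the (failing) node description, and container status. The commands below are copied from the log and can be run together to capture one snapshot by hand, assuming journalctl, dmesg and crictl are available on the guest:

	sudo journalctl -u kubelet -n 400                                          # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	sudo journalctl -u crio -n 400                                             # CRI-O logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status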
	I0804 00:17:19.110218   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.110572   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:18.828527   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:20.828554   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:21.600968   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.602526   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.603220   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:22.727235   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:22.740922   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:22.740996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:22.780356   64758 cri.go:89] found id: ""
	I0804 00:17:22.780381   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.780392   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:22.780400   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:22.780459   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:22.817075   64758 cri.go:89] found id: ""
	I0804 00:17:22.817100   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.817111   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:22.817119   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:22.817182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:22.857213   64758 cri.go:89] found id: ""
	I0804 00:17:22.857243   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.857253   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:22.857260   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:22.857325   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:22.894049   64758 cri.go:89] found id: ""
	I0804 00:17:22.894085   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.894096   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:22.894104   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:22.894171   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:22.929718   64758 cri.go:89] found id: ""
	I0804 00:17:22.929746   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.929756   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:22.929770   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:22.929843   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:22.964863   64758 cri.go:89] found id: ""
	I0804 00:17:22.964892   64758 logs.go:276] 0 containers: []
	W0804 00:17:22.964901   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:22.964907   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:22.964958   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:23.002565   64758 cri.go:89] found id: ""
	I0804 00:17:23.002593   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.002603   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:23.002611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:23.002676   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:23.038161   64758 cri.go:89] found id: ""
	I0804 00:17:23.038188   64758 logs.go:276] 0 containers: []
	W0804 00:17:23.038199   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:23.038211   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:23.038224   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:23.091865   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:23.091903   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:23.108358   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:23.108388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:23.186417   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:23.186438   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:23.186453   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.269119   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:23.269161   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:25.812405   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:25.833174   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:25.833253   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:25.881654   64758 cri.go:89] found id: ""
	I0804 00:17:25.881681   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.881690   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:25.881696   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:25.881757   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:25.936968   64758 cri.go:89] found id: ""
	I0804 00:17:25.936997   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.937006   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:25.937011   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:25.937066   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:25.972437   64758 cri.go:89] found id: ""
	I0804 00:17:25.972462   64758 logs.go:276] 0 containers: []
	W0804 00:17:25.972470   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:25.972475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:25.972529   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:26.008306   64758 cri.go:89] found id: ""
	I0804 00:17:26.008346   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.008357   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:26.008366   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:26.008435   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:26.045593   64758 cri.go:89] found id: ""
	I0804 00:17:26.045620   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.045632   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:26.045639   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:26.045696   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:26.084170   64758 cri.go:89] found id: ""
	I0804 00:17:26.084195   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.084205   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:26.084212   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:26.084272   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:26.122524   64758 cri.go:89] found id: ""
	I0804 00:17:26.122551   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.122559   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:26.122565   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:26.122623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:26.159264   64758 cri.go:89] found id: ""
	I0804 00:17:26.159297   64758 logs.go:276] 0 containers: []
	W0804 00:17:26.159308   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:26.159320   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:26.159337   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:26.205692   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:26.205718   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:26.257286   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:26.257321   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:26.271582   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:26.271611   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:26.344562   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:26.344586   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:26.344598   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:23.112800   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.610507   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:23.327294   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:25.828519   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.100160   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.100351   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:28.929410   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:28.943941   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:28.944003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:28.986127   64758 cri.go:89] found id: ""
	I0804 00:17:28.986157   64758 logs.go:276] 0 containers: []
	W0804 00:17:28.986169   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:28.986176   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:28.986237   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:29.025528   64758 cri.go:89] found id: ""
	I0804 00:17:29.025556   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.025564   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:29.025570   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:29.025624   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:29.059525   64758 cri.go:89] found id: ""
	I0804 00:17:29.059553   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.059561   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:29.059566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:29.059614   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:29.097451   64758 cri.go:89] found id: ""
	I0804 00:17:29.097489   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.097499   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:29.097506   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:29.097564   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:29.135504   64758 cri.go:89] found id: ""
	I0804 00:17:29.135532   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.135540   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:29.135546   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:29.135601   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:29.175277   64758 cri.go:89] found id: ""
	I0804 00:17:29.175314   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.175324   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:29.175332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:29.175391   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:29.210275   64758 cri.go:89] found id: ""
	I0804 00:17:29.210303   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.210314   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:29.210321   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:29.210382   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:29.246138   64758 cri.go:89] found id: ""
	I0804 00:17:29.246174   64758 logs.go:276] 0 containers: []
	W0804 00:17:29.246186   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:29.246196   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:29.246213   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:29.298935   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:29.298971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:29.313342   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:29.313388   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:29.384609   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:29.384635   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:29.384650   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:29.461759   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:29.461795   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:27.611021   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:29.612149   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:27.831367   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:30.327878   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.328772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.101073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.600832   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:32.010152   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:32.023609   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:32.023677   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:32.062480   64758 cri.go:89] found id: ""
	I0804 00:17:32.062508   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.062517   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:32.062523   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:32.062590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:32.099601   64758 cri.go:89] found id: ""
	I0804 00:17:32.099627   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.099634   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:32.099640   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:32.099691   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:32.138651   64758 cri.go:89] found id: ""
	I0804 00:17:32.138680   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.138689   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:32.138694   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:32.138751   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:32.182224   64758 cri.go:89] found id: ""
	I0804 00:17:32.182249   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.182257   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:32.182264   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:32.182318   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:32.224381   64758 cri.go:89] found id: ""
	I0804 00:17:32.224410   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.224421   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:32.224429   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:32.224486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:32.261569   64758 cri.go:89] found id: ""
	I0804 00:17:32.261600   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.261609   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:32.261615   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:32.261663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:32.304769   64758 cri.go:89] found id: ""
	I0804 00:17:32.304793   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.304807   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:32.304814   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:32.304867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:32.348695   64758 cri.go:89] found id: ""
	I0804 00:17:32.348727   64758 logs.go:276] 0 containers: []
	W0804 00:17:32.348736   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:32.348745   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:32.348757   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:32.389444   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:32.389473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:32.442901   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:32.442938   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:32.457562   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:32.457588   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:32.529121   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:32.529144   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:32.529160   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.114712   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:35.129725   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:35.129795   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:35.167226   64758 cri.go:89] found id: ""
	I0804 00:17:35.167248   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.167257   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:35.167262   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:35.167310   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:35.200889   64758 cri.go:89] found id: ""
	I0804 00:17:35.200914   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.200922   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:35.200927   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:35.201000   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:35.234899   64758 cri.go:89] found id: ""
	I0804 00:17:35.234927   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.234938   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:35.234945   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:35.235003   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:35.271355   64758 cri.go:89] found id: ""
	I0804 00:17:35.271393   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.271405   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:35.271412   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:35.271471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:35.313557   64758 cri.go:89] found id: ""
	I0804 00:17:35.313585   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.313595   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:35.313602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:35.313663   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:35.352931   64758 cri.go:89] found id: ""
	I0804 00:17:35.352960   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.352971   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:35.352979   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:35.353046   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:35.391202   64758 cri.go:89] found id: ""
	I0804 00:17:35.391232   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.391256   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:35.391263   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:35.391337   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:35.427599   64758 cri.go:89] found id: ""
	I0804 00:17:35.427627   64758 logs.go:276] 0 containers: []
	W0804 00:17:35.427638   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:35.427649   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:35.427666   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:35.482025   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:35.482061   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:35.498274   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:35.498303   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:35.572606   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:35.572631   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:35.572644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:35.655534   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:35.655566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
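The per-component sweep that precedes each gathering pass (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) returns an empty ID list every time, meaning none of the control-plane containers were ever created. A compact way to repeat the same sweep manually, reusing the crictl invocation from the log:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"   # empty output means no matching container
	done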
	I0804 00:17:32.114835   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.610785   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:34.827077   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.827108   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:36.601588   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.602210   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:40.602295   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:38.205756   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:38.218974   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:38.219044   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:38.253798   64758 cri.go:89] found id: ""
	I0804 00:17:38.253827   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.253839   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:38.253852   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:38.253911   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:38.291074   64758 cri.go:89] found id: ""
	I0804 00:17:38.291102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.291113   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:38.291120   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:38.291182   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:38.332097   64758 cri.go:89] found id: ""
	I0804 00:17:38.332123   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.332133   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:38.332140   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:38.332198   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:38.370074   64758 cri.go:89] found id: ""
	I0804 00:17:38.370102   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.370110   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:38.370117   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:38.370176   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:38.406962   64758 cri.go:89] found id: ""
	I0804 00:17:38.406984   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.406993   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:38.406998   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:38.407051   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:38.447532   64758 cri.go:89] found id: ""
	I0804 00:17:38.447562   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.447572   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:38.447579   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:38.447653   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:38.484326   64758 cri.go:89] found id: ""
	I0804 00:17:38.484356   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.484368   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:38.484375   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:38.484444   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:38.521831   64758 cri.go:89] found id: ""
	I0804 00:17:38.521858   64758 logs.go:276] 0 containers: []
	W0804 00:17:38.521869   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:38.521880   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:38.521893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:38.570540   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:38.570569   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:38.624921   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:38.624953   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:38.639451   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:38.639477   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:38.714435   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:38.714459   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:38.714475   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
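	(Editor's note: each retry cycle above lists CRI containers component by component with "crictl ps -a --quiet --name=<component>" and, finding none, falls back to gathering kubelet, dmesg and CRI-O journals. The sketch below reproduces that listing step locally as a rough illustration; the helper name is hypothetical, and minikube actually runs these commands over SSH via ssh_runner rather than directly.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the "sudo crictl ps -a --quiet --name=<name>"
	// calls in the log above: it returns the container IDs crictl prints,
	// one per line, for containers in any state.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}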
	I0804 00:17:41.295160   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:41.310032   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:41.310108   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:41.350363   64758 cri.go:89] found id: ""
	I0804 00:17:41.350393   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.350404   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:41.350412   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:41.350475   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:41.391662   64758 cri.go:89] found id: ""
	I0804 00:17:41.391691   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.391698   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:41.391703   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:41.391760   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:41.429653   64758 cri.go:89] found id: ""
	I0804 00:17:41.429678   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.429686   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:41.429692   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:41.429739   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:41.469456   64758 cri.go:89] found id: ""
	I0804 00:17:41.469483   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.469494   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:41.469505   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:41.469566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:41.506124   64758 cri.go:89] found id: ""
	I0804 00:17:41.506154   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.506164   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:41.506171   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:41.506234   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:41.543139   64758 cri.go:89] found id: ""
	I0804 00:17:41.543171   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.543182   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:41.543190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:41.543252   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:41.580537   64758 cri.go:89] found id: ""
	I0804 00:17:41.580568   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.580578   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:41.580585   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:41.580652   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:41.619828   64758 cri.go:89] found id: ""
	I0804 00:17:41.619854   64758 logs.go:276] 0 containers: []
	W0804 00:17:41.619862   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:41.619869   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:41.619882   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:41.660749   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:41.660780   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:41.712889   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:41.712924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:41.726422   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:41.726447   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:41.805673   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:41.805697   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:41.805712   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:37.110193   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.111203   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:39.327800   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:41.327910   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.099815   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.101262   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:44.386563   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:44.399891   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:44.399954   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:44.434270   64758 cri.go:89] found id: ""
	I0804 00:17:44.434297   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.434305   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:44.434311   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:44.434372   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:44.469423   64758 cri.go:89] found id: ""
	I0804 00:17:44.469454   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.469463   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:44.469468   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:44.469535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:44.505511   64758 cri.go:89] found id: ""
	I0804 00:17:44.505539   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.505547   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:44.505553   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:44.505602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:44.540897   64758 cri.go:89] found id: ""
	I0804 00:17:44.540922   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.540932   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:44.540937   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:44.540996   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:44.578722   64758 cri.go:89] found id: ""
	I0804 00:17:44.578747   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.578755   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:44.578760   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:44.578812   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:44.615838   64758 cri.go:89] found id: ""
	I0804 00:17:44.615863   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.615874   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:44.615881   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:44.615940   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:44.657695   64758 cri.go:89] found id: ""
	I0804 00:17:44.657724   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.657734   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:44.657741   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:44.657916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:44.695852   64758 cri.go:89] found id: ""
	I0804 00:17:44.695882   64758 logs.go:276] 0 containers: []
	W0804 00:17:44.695892   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:44.695901   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:44.695912   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:44.754643   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:44.754687   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:44.773964   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:44.773994   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:44.857544   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:44.857567   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:44.857583   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:44.952987   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:44.953027   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:43.610772   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:45.611480   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:43.827218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:46.327323   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.600755   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.099574   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:47.504957   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:47.520153   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:47.520232   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:47.557303   64758 cri.go:89] found id: ""
	I0804 00:17:47.557326   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.557334   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:47.557339   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:47.557410   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:47.595626   64758 cri.go:89] found id: ""
	I0804 00:17:47.595655   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.595665   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:47.595675   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:47.595733   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:47.633430   64758 cri.go:89] found id: ""
	I0804 00:17:47.633458   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.633466   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:47.633472   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:47.633525   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:47.670116   64758 cri.go:89] found id: ""
	I0804 00:17:47.670140   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.670149   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:47.670154   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:47.670200   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:47.709019   64758 cri.go:89] found id: ""
	I0804 00:17:47.709042   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.709050   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:47.709055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:47.709111   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:47.745230   64758 cri.go:89] found id: ""
	I0804 00:17:47.745251   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.745259   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:47.745265   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:47.745319   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:47.787957   64758 cri.go:89] found id: ""
	I0804 00:17:47.787985   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.787996   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:47.788004   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:47.788063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:47.821451   64758 cri.go:89] found id: ""
	I0804 00:17:47.821477   64758 logs.go:276] 0 containers: []
	W0804 00:17:47.821488   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:47.821498   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:47.821516   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:47.903035   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:47.903139   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:47.903162   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:47.986659   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:47.986702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:48.037921   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:48.037951   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:48.095354   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:48.095389   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:50.613264   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:50.627717   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:50.627792   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:50.669311   64758 cri.go:89] found id: ""
	I0804 00:17:50.669338   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.669347   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:50.669370   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:50.669438   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:50.714674   64758 cri.go:89] found id: ""
	I0804 00:17:50.714704   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.714713   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:50.714718   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:50.714769   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:50.755291   64758 cri.go:89] found id: ""
	I0804 00:17:50.755318   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.755326   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:50.755332   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:50.755394   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:50.801927   64758 cri.go:89] found id: ""
	I0804 00:17:50.801955   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.801964   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:50.801970   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:50.802020   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:50.845096   64758 cri.go:89] found id: ""
	I0804 00:17:50.845121   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.845130   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:50.845136   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:50.845193   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:50.882664   64758 cri.go:89] found id: ""
	I0804 00:17:50.882694   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.882705   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:50.882712   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:50.882771   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:50.921233   64758 cri.go:89] found id: ""
	I0804 00:17:50.921260   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.921268   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:50.921273   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:50.921326   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:50.955254   64758 cri.go:89] found id: ""
	I0804 00:17:50.955286   64758 logs.go:276] 0 containers: []
	W0804 00:17:50.955298   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:50.955311   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:50.955329   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:51.010001   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:51.010037   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:51.024943   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:51.024966   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:51.096095   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:51.096123   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:51.096139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:51.177829   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:51.177864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:47.611778   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.110408   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:48.328693   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:50.828022   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.609609   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.100616   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:53.720665   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:53.736318   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:53.736380   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:53.772887   64758 cri.go:89] found id: ""
	I0804 00:17:53.772916   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.772926   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:53.772934   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:53.772995   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:53.811771   64758 cri.go:89] found id: ""
	I0804 00:17:53.811797   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.811837   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:53.811845   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:53.811906   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:53.846684   64758 cri.go:89] found id: ""
	I0804 00:17:53.846716   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.846726   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:53.846736   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:53.846798   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:53.883550   64758 cri.go:89] found id: ""
	I0804 00:17:53.883581   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.883592   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:53.883600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:53.883662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:53.921031   64758 cri.go:89] found id: ""
	I0804 00:17:53.921061   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.921072   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:53.921080   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:53.921153   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:53.960338   64758 cri.go:89] found id: ""
	I0804 00:17:53.960364   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.960374   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:53.960381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:53.960441   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:53.998404   64758 cri.go:89] found id: ""
	I0804 00:17:53.998434   64758 logs.go:276] 0 containers: []
	W0804 00:17:53.998450   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:53.998458   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:53.998520   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:54.033417   64758 cri.go:89] found id: ""
	I0804 00:17:54.033444   64758 logs.go:276] 0 containers: []
	W0804 00:17:54.033453   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:54.033461   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:54.033473   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:54.071945   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:54.071971   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:54.124614   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:54.124644   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:54.140757   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:54.140783   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:54.241735   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:54.241754   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:54.241769   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:56.821591   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:56.836569   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:56.836631   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:56.872013   64758 cri.go:89] found id: ""
	I0804 00:17:56.872039   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.872048   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:56.872054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:56.872110   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:52.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:55.111566   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:52.828335   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:54.830625   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.831382   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:57.101663   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.600253   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:56.908022   64758 cri.go:89] found id: ""
	I0804 00:17:56.908051   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.908061   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:56.908067   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:56.908114   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:17:56.943309   64758 cri.go:89] found id: ""
	I0804 00:17:56.943336   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.943347   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:17:56.943359   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:17:56.943415   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:17:56.977799   64758 cri.go:89] found id: ""
	I0804 00:17:56.977839   64758 logs.go:276] 0 containers: []
	W0804 00:17:56.977847   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:17:56.977853   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:17:56.977916   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:17:57.015185   64758 cri.go:89] found id: ""
	I0804 00:17:57.015213   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.015223   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:17:57.015237   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:17:57.015295   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:17:57.051856   64758 cri.go:89] found id: ""
	I0804 00:17:57.051879   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.051887   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:17:57.051893   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:17:57.051944   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:17:57.086349   64758 cri.go:89] found id: ""
	I0804 00:17:57.086376   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.086387   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:17:57.086393   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:17:57.086439   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:17:57.125005   64758 cri.go:89] found id: ""
	I0804 00:17:57.125048   64758 logs.go:276] 0 containers: []
	W0804 00:17:57.125064   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:17:57.125076   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:17:57.125090   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:17:57.200348   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:17:57.200382   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:17:57.240899   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:17:57.240924   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.294331   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:17:57.294375   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:17:57.308388   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:17:57.308429   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:17:57.382602   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:17:59.883070   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:17:59.897055   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:17:59.897116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:17:59.932983   64758 cri.go:89] found id: ""
	I0804 00:17:59.933012   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.933021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:17:59.933029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:17:59.933088   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:17:59.971781   64758 cri.go:89] found id: ""
	I0804 00:17:59.971807   64758 logs.go:276] 0 containers: []
	W0804 00:17:59.971815   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:17:59.971820   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:17:59.971878   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:00.008381   64758 cri.go:89] found id: ""
	I0804 00:18:00.008406   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.008414   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:00.008419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:00.008483   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:00.053257   64758 cri.go:89] found id: ""
	I0804 00:18:00.053281   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.053290   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:00.053295   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:00.053342   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:00.089891   64758 cri.go:89] found id: ""
	I0804 00:18:00.089925   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.089936   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:00.089943   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:00.090008   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:00.129833   64758 cri.go:89] found id: ""
	I0804 00:18:00.129863   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.129875   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:00.129884   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:00.129942   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:00.181324   64758 cri.go:89] found id: ""
	I0804 00:18:00.181390   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.181403   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:00.181410   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:00.181471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:00.224426   64758 cri.go:89] found id: ""
	I0804 00:18:00.224451   64758 logs.go:276] 0 containers: []
	W0804 00:18:00.224459   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:00.224467   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:00.224481   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:00.240122   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:00.240155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:00.317324   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:00.317346   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:00.317379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:00.398917   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:00.398952   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:00.440730   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:00.440758   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:17:57.111741   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.611509   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:17:59.327597   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:01.328678   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.099384   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.100512   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:02.992128   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:03.006787   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:03.006870   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:03.041291   64758 cri.go:89] found id: ""
	I0804 00:18:03.041321   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.041332   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:03.041341   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:03.041427   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:03.077822   64758 cri.go:89] found id: ""
	I0804 00:18:03.077851   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.077863   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:03.077871   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:03.077928   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:03.116579   64758 cri.go:89] found id: ""
	I0804 00:18:03.116603   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.116611   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:03.116616   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:03.116662   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:03.154904   64758 cri.go:89] found id: ""
	I0804 00:18:03.154931   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.154942   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:03.154950   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:03.155018   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:03.190300   64758 cri.go:89] found id: ""
	I0804 00:18:03.190328   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.190341   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:03.190349   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:03.190413   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:03.225975   64758 cri.go:89] found id: ""
	I0804 00:18:03.226004   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.226016   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:03.226023   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:03.226087   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:03.271499   64758 cri.go:89] found id: ""
	I0804 00:18:03.271525   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.271535   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:03.271543   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:03.271602   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:03.308643   64758 cri.go:89] found id: ""
	I0804 00:18:03.308668   64758 logs.go:276] 0 containers: []
	W0804 00:18:03.308675   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:03.308684   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:03.308698   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:03.324528   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:03.324562   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:03.401102   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:03.401125   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:03.401139   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:03.481817   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:03.481864   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:03.522568   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:03.522601   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.074678   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:06.089765   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:06.089844   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:06.128372   64758 cri.go:89] found id: ""
	I0804 00:18:06.128400   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.128411   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:06.128419   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:06.128467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:06.169488   64758 cri.go:89] found id: ""
	I0804 00:18:06.169515   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.169525   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:06.169532   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:06.169590   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:06.207969   64758 cri.go:89] found id: ""
	I0804 00:18:06.207998   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.208009   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:06.208015   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:06.208067   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:06.244497   64758 cri.go:89] found id: ""
	I0804 00:18:06.244521   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.244529   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:06.244535   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:06.244592   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:06.282905   64758 cri.go:89] found id: ""
	I0804 00:18:06.282935   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.282945   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:06.282952   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:06.283013   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:06.322498   64758 cri.go:89] found id: ""
	I0804 00:18:06.322523   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.322530   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:06.322537   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:06.322583   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:06.361377   64758 cri.go:89] found id: ""
	I0804 00:18:06.361402   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.361412   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:06.361420   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:06.361485   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:06.402082   64758 cri.go:89] found id: ""
	I0804 00:18:06.402112   64758 logs.go:276] 0 containers: []
	W0804 00:18:06.402120   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:06.402128   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:06.402141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:06.452052   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:06.452089   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:06.466695   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:06.466734   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:06.546115   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:06.546140   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:06.546155   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:06.639671   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:06.639708   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:02.111360   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:04.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.612557   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:03.330392   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:05.828925   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:06.603713   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.100060   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:09.193473   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:09.207696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:09.207755   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:09.247757   64758 cri.go:89] found id: ""
	I0804 00:18:09.247784   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.247795   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:09.247802   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:09.247867   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:09.285516   64758 cri.go:89] found id: ""
	I0804 00:18:09.285549   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.285559   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:09.285567   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:09.285628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:09.321689   64758 cri.go:89] found id: ""
	I0804 00:18:09.321715   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.321725   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:09.321732   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:09.321789   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:09.358135   64758 cri.go:89] found id: ""
	I0804 00:18:09.358158   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.358166   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:09.358176   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:09.358223   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:09.393642   64758 cri.go:89] found id: ""
	I0804 00:18:09.393667   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.393675   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:09.393681   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:09.393730   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:09.430651   64758 cri.go:89] found id: ""
	I0804 00:18:09.430674   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.430683   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:09.430689   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:09.430734   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:09.472433   64758 cri.go:89] found id: ""
	I0804 00:18:09.472460   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.472469   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:09.472474   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:09.472533   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:09.511147   64758 cri.go:89] found id: ""
	I0804 00:18:09.511171   64758 logs.go:276] 0 containers: []
	W0804 00:18:09.511179   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:09.511187   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:09.511207   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:09.560099   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:09.560142   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:09.574609   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:09.574641   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:09.646863   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:09.646891   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:09.646906   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:09.727309   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:09.727352   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:09.111726   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.611445   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:08.329278   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:10.827361   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:11.600326   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:14.099811   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:12.268925   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:12.284737   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:12.284813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:12.326015   64758 cri.go:89] found id: ""
	I0804 00:18:12.326036   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.326044   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:12.326049   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:12.326095   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:12.374096   64758 cri.go:89] found id: ""
	I0804 00:18:12.374129   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.374138   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:12.374143   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:12.374235   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:12.426467   64758 cri.go:89] found id: ""
	I0804 00:18:12.426493   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.426502   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:12.426509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:12.426570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:12.485034   64758 cri.go:89] found id: ""
	I0804 00:18:12.485060   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.485072   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:12.485079   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:12.485138   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:12.523490   64758 cri.go:89] found id: ""
	I0804 00:18:12.523517   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.523525   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:12.523530   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:12.523577   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:12.563318   64758 cri.go:89] found id: ""
	I0804 00:18:12.563347   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.563358   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:12.563365   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:12.563425   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:12.600455   64758 cri.go:89] found id: ""
	I0804 00:18:12.600482   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.600492   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:12.600499   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:12.600566   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:12.641146   64758 cri.go:89] found id: ""
	I0804 00:18:12.641170   64758 logs.go:276] 0 containers: []
	W0804 00:18:12.641178   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:12.641186   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:12.641197   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:12.697240   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:12.697274   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:12.711399   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:12.711432   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:12.794022   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:12.794050   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:12.794067   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:12.881327   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:12.881379   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:15.425765   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:15.439338   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:15.439420   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:15.477964   64758 cri.go:89] found id: ""
	I0804 00:18:15.477991   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.478002   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:15.478009   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:15.478069   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:15.514554   64758 cri.go:89] found id: ""
	I0804 00:18:15.514574   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.514583   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:15.514588   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:15.514636   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:15.549702   64758 cri.go:89] found id: ""
	I0804 00:18:15.549732   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.549741   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:15.549747   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:15.549813   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:15.584619   64758 cri.go:89] found id: ""
	I0804 00:18:15.584663   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.584675   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:15.584683   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:15.584746   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:15.625084   64758 cri.go:89] found id: ""
	I0804 00:18:15.625111   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.625121   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:15.625128   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:15.625192   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:15.666629   64758 cri.go:89] found id: ""
	I0804 00:18:15.666655   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.666664   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:15.666673   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:15.666726   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:15.704287   64758 cri.go:89] found id: ""
	I0804 00:18:15.704316   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.704324   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:15.704330   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:15.704383   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:15.740629   64758 cri.go:89] found id: ""
	I0804 00:18:15.740659   64758 logs.go:276] 0 containers: []
	W0804 00:18:15.740668   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:15.740678   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:15.740702   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:15.794093   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:15.794124   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:15.807629   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:15.807659   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:15.887638   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:15.887665   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:15.887680   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:15.972935   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:15.972978   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:13.611758   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.613472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:13.327640   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:15.827432   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:16.100599   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.603192   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:18.518022   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:18.532360   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:18.532433   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:18.565519   64758 cri.go:89] found id: ""
	I0804 00:18:18.565544   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.565552   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:18.565557   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:18.565612   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:18.599939   64758 cri.go:89] found id: ""
	I0804 00:18:18.599967   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.599978   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:18.599985   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:18.600055   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:18.639035   64758 cri.go:89] found id: ""
	I0804 00:18:18.639062   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.639070   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:18.639076   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:18.639124   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:18.677188   64758 cri.go:89] found id: ""
	I0804 00:18:18.677210   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.677218   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:18.677223   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:18.677268   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:18.715892   64758 cri.go:89] found id: ""
	I0804 00:18:18.715921   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.715932   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:18.715940   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:18.716005   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:18.752274   64758 cri.go:89] found id: ""
	I0804 00:18:18.752298   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.752307   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:18.752313   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:18.752368   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:18.795251   64758 cri.go:89] found id: ""
	I0804 00:18:18.795279   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.795288   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:18.795293   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:18.795353   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.830842   64758 cri.go:89] found id: ""
	I0804 00:18:18.830866   64758 logs.go:276] 0 containers: []
	W0804 00:18:18.830874   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:18.830882   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:18.830893   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:18.883687   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:18.883719   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:18.898406   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:18.898433   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:18.973191   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:18.973215   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:18.973231   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:19.054094   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:19.054141   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:21.597245   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:21.612534   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:21.612605   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:21.649391   64758 cri.go:89] found id: ""
	I0804 00:18:21.649415   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.649426   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:21.649434   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:21.649492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:21.683202   64758 cri.go:89] found id: ""
	I0804 00:18:21.683226   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.683233   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:21.683244   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:21.683300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:21.717450   64758 cri.go:89] found id: ""
	I0804 00:18:21.717475   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.717484   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:21.717489   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:21.717547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:21.752559   64758 cri.go:89] found id: ""
	I0804 00:18:21.752588   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.752596   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:21.752602   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:21.752650   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:21.788336   64758 cri.go:89] found id: ""
	I0804 00:18:21.788364   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.788375   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:21.788381   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:21.788428   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:21.829404   64758 cri.go:89] found id: ""
	I0804 00:18:21.829428   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.829436   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:21.829443   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:21.829502   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:21.869473   64758 cri.go:89] found id: ""
	I0804 00:18:21.869504   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.869515   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:21.869521   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:21.869587   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:18.111377   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.610253   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:17.827889   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:20.327830   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.100486   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:23.599788   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.601620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:21.909883   64758 cri.go:89] found id: ""
	I0804 00:18:21.909907   64758 logs.go:276] 0 containers: []
	W0804 00:18:21.909915   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:21.909923   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:21.909940   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:21.925038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:21.925071   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:22.000261   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.000281   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:22.000294   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:22.082813   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:22.082846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:22.126741   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:22.126774   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:24.677246   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:24.692404   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:24.692467   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:24.739001   64758 cri.go:89] found id: ""
	I0804 00:18:24.739039   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.739049   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:24.739054   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:24.739119   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:24.779558   64758 cri.go:89] found id: ""
	I0804 00:18:24.779586   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.779597   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:24.779605   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:24.779664   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:24.819257   64758 cri.go:89] found id: ""
	I0804 00:18:24.819284   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.819295   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:24.819301   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:24.819363   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:24.862504   64758 cri.go:89] found id: ""
	I0804 00:18:24.862531   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.862539   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:24.862544   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:24.862599   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:24.899605   64758 cri.go:89] found id: ""
	I0804 00:18:24.899637   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.899649   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:24.899656   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:24.899716   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:24.936575   64758 cri.go:89] found id: ""
	I0804 00:18:24.936604   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.936612   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:24.936618   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:24.936667   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:24.971736   64758 cri.go:89] found id: ""
	I0804 00:18:24.971764   64758 logs.go:276] 0 containers: []
	W0804 00:18:24.971775   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:24.971782   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:24.971851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:25.010214   64758 cri.go:89] found id: ""
	I0804 00:18:25.010244   64758 logs.go:276] 0 containers: []
	W0804 00:18:25.010253   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:25.010265   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:25.010279   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:25.091145   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:25.091186   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:25.137574   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:25.137603   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:25.189559   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:25.189593   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:25.204725   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:25.204763   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:25.278903   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:22.612077   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:25.111666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:22.827542   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:24.829587   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.326922   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:28.100576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:30.603955   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:27.779500   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:27.793548   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:27.793628   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:27.830811   64758 cri.go:89] found id: ""
	I0804 00:18:27.830844   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.830854   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:27.830862   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:27.830919   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:27.869966   64758 cri.go:89] found id: ""
	I0804 00:18:27.869991   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.869998   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:27.870004   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:27.870062   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:27.909474   64758 cri.go:89] found id: ""
	I0804 00:18:27.909496   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.909504   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:27.909509   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:27.909567   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:27.948588   64758 cri.go:89] found id: ""
	I0804 00:18:27.948613   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.948625   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:27.948632   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:27.948704   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:27.991957   64758 cri.go:89] found id: ""
	I0804 00:18:27.991979   64758 logs.go:276] 0 containers: []
	W0804 00:18:27.991987   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:27.991993   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:27.992052   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:28.029516   64758 cri.go:89] found id: ""
	I0804 00:18:28.029544   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.029555   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:28.029562   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:28.029627   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:28.067851   64758 cri.go:89] found id: ""
	I0804 00:18:28.067879   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.067891   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:28.067898   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:28.067957   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:28.107488   64758 cri.go:89] found id: ""
	I0804 00:18:28.107514   64758 logs.go:276] 0 containers: []
	W0804 00:18:28.107524   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:28.107534   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:28.107548   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:28.158490   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:28.158523   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:28.172000   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:28.172030   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:28.247803   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:28.247823   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:28.247839   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:28.326695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:28.326727   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:30.867241   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:30.881074   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:30.881146   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:30.919078   64758 cri.go:89] found id: ""
	I0804 00:18:30.919105   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.919115   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:30.919122   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:30.919184   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:30.954436   64758 cri.go:89] found id: ""
	I0804 00:18:30.954463   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.954474   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:30.954481   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:30.954546   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:30.993080   64758 cri.go:89] found id: ""
	I0804 00:18:30.993110   64758 logs.go:276] 0 containers: []
	W0804 00:18:30.993121   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:30.993129   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:30.993188   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:31.031465   64758 cri.go:89] found id: ""
	I0804 00:18:31.031493   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.031504   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:31.031512   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:31.031570   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:31.067374   64758 cri.go:89] found id: ""
	I0804 00:18:31.067405   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.067416   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:31.067423   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:31.067493   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:31.104021   64758 cri.go:89] found id: ""
	I0804 00:18:31.104048   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.104059   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:31.104066   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:31.104128   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:31.146995   64758 cri.go:89] found id: ""
	I0804 00:18:31.147023   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.147033   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:31.147040   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:31.147106   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:31.184708   64758 cri.go:89] found id: ""
	I0804 00:18:31.184739   64758 logs.go:276] 0 containers: []
	W0804 00:18:31.184749   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:31.184760   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:31.184776   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:31.237743   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:31.237781   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:31.252038   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:31.252070   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:31.326357   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:31.326380   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:31.326401   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:31.408212   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:31.408256   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:27.610666   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.610899   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:31.611472   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:29.827980   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:32.326666   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.099814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:35.100740   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:33.954396   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:33.968311   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:33.968384   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:34.006574   64758 cri.go:89] found id: ""
	I0804 00:18:34.006605   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.006625   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:34.006635   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:34.006698   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:34.042400   64758 cri.go:89] found id: ""
	I0804 00:18:34.042427   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.042435   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:34.042441   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:34.042492   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:34.080769   64758 cri.go:89] found id: ""
	I0804 00:18:34.080793   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.080804   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:34.080810   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:34.080877   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:34.118283   64758 cri.go:89] found id: ""
	I0804 00:18:34.118311   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.118320   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:34.118326   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:34.118377   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:34.153679   64758 cri.go:89] found id: ""
	I0804 00:18:34.153708   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.153719   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:34.153727   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:34.153780   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:34.189618   64758 cri.go:89] found id: ""
	I0804 00:18:34.189674   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.189686   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:34.189696   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:34.189770   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:34.224628   64758 cri.go:89] found id: ""
	I0804 00:18:34.224666   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.224677   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:34.224684   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:34.224744   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:34.265343   64758 cri.go:89] found id: ""
	I0804 00:18:34.265389   64758 logs.go:276] 0 containers: []
	W0804 00:18:34.265399   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:34.265409   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:34.265428   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:34.337992   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:34.338014   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:34.338025   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:34.420224   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:34.420263   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:34.462009   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:34.462042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:34.520087   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:34.520120   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:34.111351   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.112271   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:34.328807   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:36.827190   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.599447   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.099291   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:37.035398   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:37.048955   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:37.049024   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:37.087433   64758 cri.go:89] found id: ""
	I0804 00:18:37.087460   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.087470   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:37.087478   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:37.087542   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:37.128227   64758 cri.go:89] found id: ""
	I0804 00:18:37.128255   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.128267   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:37.128275   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:37.128328   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:37.165371   64758 cri.go:89] found id: ""
	I0804 00:18:37.165405   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.165415   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:37.165424   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:37.165486   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:37.201168   64758 cri.go:89] found id: ""
	I0804 00:18:37.201198   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.201209   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:37.201217   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:37.201278   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:37.237378   64758 cri.go:89] found id: ""
	I0804 00:18:37.237406   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.237414   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:37.237419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:37.237465   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:37.273425   64758 cri.go:89] found id: ""
	I0804 00:18:37.273456   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.273467   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:37.273475   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:37.273547   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:37.313019   64758 cri.go:89] found id: ""
	I0804 00:18:37.313048   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.313056   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:37.313061   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:37.313116   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:37.354741   64758 cri.go:89] found id: ""
	I0804 00:18:37.354771   64758 logs.go:276] 0 containers: []
	W0804 00:18:37.354779   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:37.354788   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:37.354800   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:37.408703   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:37.408740   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:37.423393   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:37.423419   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:37.497460   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:37.497487   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:37.497501   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:37.579811   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:37.579856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:40.122872   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:40.139106   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:40.139177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:40.178571   64758 cri.go:89] found id: ""
	I0804 00:18:40.178599   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.178610   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:40.178617   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:40.178679   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:40.215680   64758 cri.go:89] found id: ""
	I0804 00:18:40.215714   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.215722   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:40.215728   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:40.215776   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:40.250618   64758 cri.go:89] found id: ""
	I0804 00:18:40.250647   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.250658   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:40.250666   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:40.250729   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:40.289195   64758 cri.go:89] found id: ""
	I0804 00:18:40.289223   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.289233   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:40.289240   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:40.289296   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:40.330961   64758 cri.go:89] found id: ""
	I0804 00:18:40.330988   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.330998   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:40.331006   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:40.331056   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:40.376435   64758 cri.go:89] found id: ""
	I0804 00:18:40.376465   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.376478   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:40.376487   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:40.376558   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:40.416415   64758 cri.go:89] found id: ""
	I0804 00:18:40.416447   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.416459   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:40.416465   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:40.416535   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:40.452958   64758 cri.go:89] found id: ""
	I0804 00:18:40.452996   64758 logs.go:276] 0 containers: []
	W0804 00:18:40.453007   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:40.453018   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:40.453036   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:40.503775   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:40.503808   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:40.517825   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:40.517855   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:40.587818   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:40.587847   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:40.587861   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:40.674139   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:40.674183   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:38.611068   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:40.611923   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:39.326489   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:41.327327   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:42.100795   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:44.602441   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.217266   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:43.232190   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:43.232262   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:43.270127   64758 cri.go:89] found id: ""
	I0804 00:18:43.270156   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.270163   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:43.270169   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:43.270219   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:43.309401   64758 cri.go:89] found id: ""
	I0804 00:18:43.309429   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.309439   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:43.309446   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:43.309503   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:43.347210   64758 cri.go:89] found id: ""
	I0804 00:18:43.347235   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.347242   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:43.347247   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:43.347300   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:43.382548   64758 cri.go:89] found id: ""
	I0804 00:18:43.382578   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.382588   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:43.382595   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:43.382658   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:43.422076   64758 cri.go:89] found id: ""
	I0804 00:18:43.422102   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.422113   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:43.422121   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:43.422168   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:43.458525   64758 cri.go:89] found id: ""
	I0804 00:18:43.458552   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.458560   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:43.458566   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:43.458623   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:43.498134   64758 cri.go:89] found id: ""
	I0804 00:18:43.498157   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.498165   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:43.498170   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:43.498217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:43.543289   64758 cri.go:89] found id: ""
	I0804 00:18:43.543312   64758 logs.go:276] 0 containers: []
	W0804 00:18:43.543320   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:43.543328   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:43.543338   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:43.593489   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:43.593521   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:43.607838   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:43.607869   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:43.682791   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:43.682813   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:43.682826   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:43.761695   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:43.761737   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.305385   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:46.320003   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:46.320063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:46.367941   64758 cri.go:89] found id: ""
	I0804 00:18:46.367969   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.367980   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:46.367986   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:46.368058   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:46.422540   64758 cri.go:89] found id: ""
	I0804 00:18:46.422563   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.422572   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:46.422578   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:46.422637   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:46.470192   64758 cri.go:89] found id: ""
	I0804 00:18:46.470238   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.470248   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:46.470257   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:46.470316   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:46.512375   64758 cri.go:89] found id: ""
	I0804 00:18:46.512399   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.512408   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:46.512413   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:46.512471   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:46.546547   64758 cri.go:89] found id: ""
	I0804 00:18:46.546580   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.546592   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:46.546600   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:46.546665   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:46.583598   64758 cri.go:89] found id: ""
	I0804 00:18:46.583621   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.583630   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:46.583636   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:46.583692   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:46.621066   64758 cri.go:89] found id: ""
	I0804 00:18:46.621101   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.621116   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:46.621122   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:46.621177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:46.654115   64758 cri.go:89] found id: ""
	I0804 00:18:46.654149   64758 logs.go:276] 0 containers: []
	W0804 00:18:46.654162   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:46.654174   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:46.654191   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:46.738542   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:46.738582   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:46.778894   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:46.778923   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:46.833225   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:46.833257   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:46.847222   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:46.847247   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:18:42.612522   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.110927   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:43.327420   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:45.327936   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:47.328380   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:46.604576   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.100232   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	W0804 00:18:46.922590   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.423639   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:49.437417   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:49.437490   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:49.474889   64758 cri.go:89] found id: ""
	I0804 00:18:49.474914   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.474923   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:49.474929   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:49.474986   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:49.512860   64758 cri.go:89] found id: ""
	I0804 00:18:49.512889   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.512900   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:49.512908   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:49.512965   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:49.550558   64758 cri.go:89] found id: ""
	I0804 00:18:49.550594   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.550603   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:49.550611   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:49.550671   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:49.587779   64758 cri.go:89] found id: ""
	I0804 00:18:49.587810   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.587823   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:49.587831   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:49.587890   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:49.630307   64758 cri.go:89] found id: ""
	I0804 00:18:49.630333   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.630344   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:49.630352   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:49.630411   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:49.665013   64758 cri.go:89] found id: ""
	I0804 00:18:49.665046   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.665057   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:49.665064   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:49.665127   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:49.701375   64758 cri.go:89] found id: ""
	I0804 00:18:49.701401   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.701410   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:49.701415   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:49.701472   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:49.737237   64758 cri.go:89] found id: ""
	I0804 00:18:49.737261   64758 logs.go:276] 0 containers: []
	W0804 00:18:49.737269   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:49.737278   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:49.737291   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:49.790998   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:49.791033   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:49.804933   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:49.804965   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:49.877997   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:49.878019   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:49.878035   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:49.963836   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:49.963872   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:47.611774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.612581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.616560   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:49.827900   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.829950   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:51.599613   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:53.600496   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:52.506621   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:52.521482   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:52.521553   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:52.555980   64758 cri.go:89] found id: ""
	I0804 00:18:52.556010   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.556021   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:52.556029   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:52.556094   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:52.593088   64758 cri.go:89] found id: ""
	I0804 00:18:52.593119   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.593130   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:52.593138   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:52.593197   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:52.632058   64758 cri.go:89] found id: ""
	I0804 00:18:52.632088   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.632107   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:52.632115   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:52.632177   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:52.668701   64758 cri.go:89] found id: ""
	I0804 00:18:52.668730   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.668739   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:52.668745   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:52.668814   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:52.705041   64758 cri.go:89] found id: ""
	I0804 00:18:52.705068   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.705075   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:52.705089   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:52.705149   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:52.743304   64758 cri.go:89] found id: ""
	I0804 00:18:52.743327   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.743335   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:52.743340   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:52.743397   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:52.781020   64758 cri.go:89] found id: ""
	I0804 00:18:52.781050   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.781060   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:52.781073   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:52.781134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:52.820979   64758 cri.go:89] found id: ""
	I0804 00:18:52.821004   64758 logs.go:276] 0 containers: []
	W0804 00:18:52.821014   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:52.821024   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:52.821042   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:52.876450   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:52.876488   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:52.890529   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:52.890566   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:52.960682   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:52.960710   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:52.960725   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:53.044000   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:53.044040   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:55.601594   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:55.615574   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:18:55.615645   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:18:55.655116   64758 cri.go:89] found id: ""
	I0804 00:18:55.655146   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.655157   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:18:55.655164   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:18:55.655217   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:18:55.695809   64758 cri.go:89] found id: ""
	I0804 00:18:55.695837   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.695846   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:18:55.695851   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:18:55.695909   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:18:55.732784   64758 cri.go:89] found id: ""
	I0804 00:18:55.732811   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.732822   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:18:55.732828   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:18:55.732920   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:18:55.773316   64758 cri.go:89] found id: ""
	I0804 00:18:55.773338   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.773347   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:18:55.773368   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:18:55.773416   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:18:55.808886   64758 cri.go:89] found id: ""
	I0804 00:18:55.808913   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.808924   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:18:55.808931   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:18:55.808990   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:18:55.848471   64758 cri.go:89] found id: ""
	I0804 00:18:55.848499   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.848507   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:18:55.848513   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:18:55.848568   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:18:55.884088   64758 cri.go:89] found id: ""
	I0804 00:18:55.884117   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.884128   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:18:55.884134   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:18:55.884194   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:18:55.918194   64758 cri.go:89] found id: ""
	I0804 00:18:55.918222   64758 logs.go:276] 0 containers: []
	W0804 00:18:55.918233   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:18:55.918243   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:18:55.918264   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:18:55.932685   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:18:55.932717   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:18:56.003817   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:18:56.003840   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:18:56.003856   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:18:56.087804   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:18:56.087846   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:18:56.129959   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:18:56.129993   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:18:54.111584   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.610664   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:54.327283   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.328332   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:56.100620   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.601669   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.604763   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.685077   64758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:18:58.698624   64758 kubeadm.go:597] duration metric: took 4m4.179874556s to restartPrimaryControlPlane
	W0804 00:18:58.698704   64758 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:18:58.698731   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:18:58.611004   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:00.611252   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:18:58.828188   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:01.329218   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.100214   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.101275   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.967117   64758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.268366381s)
	I0804 00:19:03.967202   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:19:03.982098   64758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:19:03.991994   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:19:04.002780   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:19:04.002802   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:19:04.002845   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:19:04.012216   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:19:04.012279   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:19:04.021463   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:19:04.030689   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:19:04.030743   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:19:04.040801   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.050496   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:19:04.050558   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:19:04.060782   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:19:04.071595   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:19:04.071673   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:19:04.082123   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:19:04.313172   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:19:02.611712   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:05.111575   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:03.827427   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:06.327317   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.599775   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:09.599814   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:07.611608   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.110194   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:08.333681   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:10.829626   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:11.601081   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.099098   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:12.110388   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:14.111401   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.610774   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:13.327035   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:15.327695   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:17.327749   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:16.100543   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.602723   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:20.603470   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:18.611336   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.111798   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:19.329120   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:21.826869   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:22.605600   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.101500   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:23.610581   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:25.610814   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:24.326982   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:26.827772   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:27.599557   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.600283   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:28.110748   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:30.111027   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:29.327031   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:31.328581   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.101571   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.601251   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:32.610784   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:34.612611   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:33.828237   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:35.828319   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.099717   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.100492   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:37.111009   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:39.610805   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:38.326730   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:40.327548   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.330066   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:41.600239   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:43.600686   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.601464   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:42.110900   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:44.610221   65087 pod_ready.go:102] pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:45.605124   65087 pod_ready.go:81] duration metric: took 4m0.000843677s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:45.605152   65087 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5xfgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0804 00:19:45.605175   65087 pod_ready.go:38] duration metric: took 4m13.615224515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:45.605208   65087 kubeadm.go:597] duration metric: took 4m21.736484018s to restartPrimaryControlPlane
	W0804 00:19:45.605273   65087 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0804 00:19:45.605304   65087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:19:44.827547   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:47.329541   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:48.101237   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:50.603754   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:49.826561   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:51.828643   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.100714   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:55.102037   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:53.832996   65441 pod_ready.go:102] pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:54.830906   65441 pod_ready.go:81] duration metric: took 4m0.010324747s for pod "metrics-server-569cc877fc-646qm" in "kube-system" namespace to be "Ready" ...
	E0804 00:19:54.830936   65441 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:19:54.830947   65441 pod_ready.go:38] duration metric: took 4m4.842701336s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:19:54.830968   65441 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:19:54.831003   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:54.831070   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:54.887772   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:54.887804   65441 cri.go:89] found id: ""
	I0804 00:19:54.887815   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:54.887877   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.892740   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:54.892801   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:54.943044   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:54.943082   65441 cri.go:89] found id: ""
	I0804 00:19:54.943092   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:54.943164   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:54.947699   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:54.947765   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:54.997280   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:54.997302   65441 cri.go:89] found id: ""
	I0804 00:19:54.997311   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:54.997380   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.005574   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:55.005642   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:55.066824   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:55.066845   65441 cri.go:89] found id: ""
	I0804 00:19:55.066852   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:55.066906   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.071713   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:55.071779   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:55.116381   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.116406   65441 cri.go:89] found id: ""
	I0804 00:19:55.116414   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:55.116468   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.121174   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:55.121237   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:55.168300   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:55.168323   65441 cri.go:89] found id: ""
	I0804 00:19:55.168331   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:55.168381   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.173450   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:55.173509   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:55.218999   65441 cri.go:89] found id: ""
	I0804 00:19:55.219030   65441 logs.go:276] 0 containers: []
	W0804 00:19:55.219041   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:55.219048   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:55.219115   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:55.263696   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:55.263723   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.263727   65441 cri.go:89] found id: ""
	I0804 00:19:55.263734   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:55.263789   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.269001   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:55.277864   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:19:55.277899   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:55.323692   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:55.323729   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:55.364971   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:55.365005   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:55.871942   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:19:55.871983   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:19:55.929828   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:55.929869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:55.987389   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:55.987425   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:56.041330   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:56.041381   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:56.082524   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:56.082556   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:56.122545   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:19:56.122572   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:56.178249   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:19:56.178288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:56.219273   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:19:56.219300   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:19:56.235345   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:19:56.235389   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:19:56.370660   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:56.370692   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:57.600248   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:00.100920   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:19:58.936934   65441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:19:58.953624   65441 api_server.go:72] duration metric: took 4m14.22488371s to wait for apiserver process to appear ...
	I0804 00:19:58.953655   65441 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:19:58.953700   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:19:58.953764   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:19:58.997408   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:58.997434   65441 cri.go:89] found id: ""
	I0804 00:19:58.997443   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:19:58.997492   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.004398   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:19:59.004466   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:19:59.041483   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.041510   65441 cri.go:89] found id: ""
	I0804 00:19:59.041518   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:19:59.041568   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.045754   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:19:59.045825   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:19:59.081738   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.081756   65441 cri.go:89] found id: ""
	I0804 00:19:59.081764   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:19:59.081809   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.086297   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:19:59.086348   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:19:59.124421   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:19:59.124440   65441 cri.go:89] found id: ""
	I0804 00:19:59.124447   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:19:59.124494   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.128612   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:19:59.128677   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:19:59.165702   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:19:59.165728   65441 cri.go:89] found id: ""
	I0804 00:19:59.165737   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:19:59.165791   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.170016   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:19:59.170103   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:19:59.205275   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:19:59.205299   65441 cri.go:89] found id: ""
	I0804 00:19:59.205307   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:19:59.205377   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.209637   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:19:59.209699   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:19:59.244254   65441 cri.go:89] found id: ""
	I0804 00:19:59.244281   65441 logs.go:276] 0 containers: []
	W0804 00:19:59.244290   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:19:59.244295   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:19:59.244343   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:19:59.281850   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:19:59.281876   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.281880   65441 cri.go:89] found id: ""
	I0804 00:19:59.281887   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:19:59.281935   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.286423   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:19:59.291108   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:19:59.291134   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:19:59.340778   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:19:59.340808   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:19:59.379258   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:19:59.379288   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:19:59.418902   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:19:59.418932   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:19:59.875668   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:19:59.875708   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:19:59.932947   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:19:59.932980   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:19:59.980190   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:19:59.980224   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:00.024331   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:00.024359   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:00.064676   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:00.064701   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:00.117684   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:00.117717   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:00.153654   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:00.153683   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:00.200840   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:00.200869   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:00.214380   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:00.214410   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:02.101240   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:04.600064   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:02.832546   65441 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8444/healthz ...
	I0804 00:20:02.837684   65441 api_server.go:279] https://192.168.39.132:8444/healthz returned 200:
	ok
	I0804 00:20:02.838736   65441 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:02.838763   65441 api_server.go:131] duration metric: took 3.885096913s to wait for apiserver health ...
	I0804 00:20:02.838773   65441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:02.838798   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:02.838856   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:02.878530   65441 cri.go:89] found id: "0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:02.878556   65441 cri.go:89] found id: ""
	I0804 00:20:02.878565   65441 logs.go:276] 1 containers: [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b]
	I0804 00:20:02.878628   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.883263   65441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:02.883338   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:02.921989   65441 cri.go:89] found id: "7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:02.922009   65441 cri.go:89] found id: ""
	I0804 00:20:02.922017   65441 logs.go:276] 1 containers: [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37]
	I0804 00:20:02.922062   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.928690   65441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:02.928767   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:02.967469   65441 cri.go:89] found id: "5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:02.967490   65441 cri.go:89] found id: ""
	I0804 00:20:02.967498   65441 logs.go:276] 1 containers: [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd]
	I0804 00:20:02.967544   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:02.972155   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:02.972217   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:03.011875   65441 cri.go:89] found id: "11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:03.011900   65441 cri.go:89] found id: ""
	I0804 00:20:03.011910   65441 logs.go:276] 1 containers: [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6]
	I0804 00:20:03.011966   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.016326   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:03.016395   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:03.057114   65441 cri.go:89] found id: "572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:03.057137   65441 cri.go:89] found id: ""
	I0804 00:20:03.057145   65441 logs.go:276] 1 containers: [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d]
	I0804 00:20:03.057206   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.061528   65441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:03.061592   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:03.101778   65441 cri.go:89] found id: "f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:03.101832   65441 cri.go:89] found id: ""
	I0804 00:20:03.101842   65441 logs.go:276] 1 containers: [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f]
	I0804 00:20:03.101902   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.106292   65441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:03.106368   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:03.146453   65441 cri.go:89] found id: ""
	I0804 00:20:03.146484   65441 logs.go:276] 0 containers: []
	W0804 00:20:03.146496   65441 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:03.146504   65441 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:03.146569   65441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:03.185861   65441 cri.go:89] found id: "34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.185884   65441 cri.go:89] found id: "53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.185887   65441 cri.go:89] found id: ""
	I0804 00:20:03.185894   65441 logs.go:276] 2 containers: [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02]
	I0804 00:20:03.185941   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.190490   65441 ssh_runner.go:195] Run: which crictl
	I0804 00:20:03.194727   65441 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:03.194750   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:03.308015   65441 logs.go:123] Gathering logs for kube-apiserver [0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b] ...
	I0804 00:20:03.308052   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0897d8c61e8a2d70f8af39c629589d61f7f6f6924257d8d65e408cfffca65b"
	I0804 00:20:03.358699   65441 logs.go:123] Gathering logs for etcd [7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37] ...
	I0804 00:20:03.358732   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b181ffd7672a5992aabf906505b7c99854e4bf31c539c6d142ddc5c9717be37"
	I0804 00:20:03.410398   65441 logs.go:123] Gathering logs for storage-provisioner [53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02] ...
	I0804 00:20:03.410430   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53cb13593bed6bc326538d7315d36bddfd8b273b98804b57382a8101ea907b02"
	I0804 00:20:03.450651   65441 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:03.450685   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:03.859092   65441 logs.go:123] Gathering logs for storage-provisioner [34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f] ...
	I0804 00:20:03.859145   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34bf0e95048791cf03c2dfa4510569c1e64fda9b8573b74397228a00f8adec6f"
	I0804 00:20:03.905500   65441 logs.go:123] Gathering logs for container status ...
	I0804 00:20:03.905529   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:03.951014   65441 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:03.951047   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:04.003275   65441 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:04.003311   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:04.017574   65441 logs.go:123] Gathering logs for coredns [5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd] ...
	I0804 00:20:04.017608   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cf9a1c37ebd176e379c8e06e89f7a66582a06ef0238f73339b0047fa6102bbd"
	I0804 00:20:04.054252   65441 logs.go:123] Gathering logs for kube-scheduler [11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6] ...
	I0804 00:20:04.054283   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7eacd29c368843202b41a466809be8cde8ab5902df34b2875c6ea225ac9a6"
	I0804 00:20:04.094524   65441 logs.go:123] Gathering logs for kube-proxy [572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d] ...
	I0804 00:20:04.094558   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 572acf711df5ed313673b36f51b8e133c3a0a908b64b7e93d9e5c882cc29042d"
	I0804 00:20:04.131163   65441 logs.go:123] Gathering logs for kube-controller-manager [f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f] ...
	I0804 00:20:04.131192   65441 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f021cd4986aa6c5393ca6e7726106893792d5681740c4b1f6f8aa697e196260f"
	I0804 00:20:06.691154   65441 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:06.691193   65441 system_pods.go:61] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.691199   65441 system_pods.go:61] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.691203   65441 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.691209   65441 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.691213   65441 system_pods.go:61] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.691218   65441 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.691226   65441 system_pods.go:61] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.691232   65441 system_pods.go:61] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.691244   65441 system_pods.go:74] duration metric: took 3.852463199s to wait for pod list to return data ...
	I0804 00:20:06.691257   65441 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:06.693724   65441 default_sa.go:45] found service account: "default"
	I0804 00:20:06.693755   65441 default_sa.go:55] duration metric: took 2.486182ms for default service account to be created ...
	I0804 00:20:06.693767   65441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:06.698925   65441 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:06.698950   65441 system_pods.go:89] "coredns-7db6d8ff4d-b8v28" [e1c179bf-e99a-4b59-b731-dac458e6d6aa] Running
	I0804 00:20:06.698956   65441 system_pods.go:89] "etcd-default-k8s-diff-port-969068" [8a89df1e-6c08-4413-bfc5-dd5dab1b5c37] Running
	I0804 00:20:06.698962   65441 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-969068" [a5c39405-44b5-47db-a33d-c2f215857bab] Running
	I0804 00:20:06.698968   65441 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-969068" [ac7361df-2d91-4f7a-b9b0-cb6ff15eaaa9] Running
	I0804 00:20:06.698972   65441 system_pods.go:89] "kube-proxy-zz7fr" [9e46c77a-ef1c-402d-807b-8d12b2e17b07] Running
	I0804 00:20:06.698976   65441 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-969068" [e8d66460-aa1a-4999-b8fb-dd7e572a9f87] Running
	I0804 00:20:06.698983   65441 system_pods.go:89] "metrics-server-569cc877fc-646qm" [c28af6f2-95c1-44ae-833a-d426ca62a169] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:06.698990   65441 system_pods.go:89] "storage-provisioner" [c58edb4a-bb0b-4d76-a279-cdcf7e14bd68] Running
	I0804 00:20:06.698997   65441 system_pods.go:126] duration metric: took 5.224971ms to wait for k8s-apps to be running ...
	I0804 00:20:06.699003   65441 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:06.699047   65441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:06.714188   65441 system_svc.go:56] duration metric: took 15.17801ms WaitForService to wait for kubelet
	I0804 00:20:06.714213   65441 kubeadm.go:582] duration metric: took 4m21.985480612s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:06.714232   65441 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:06.716717   65441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:06.716743   65441 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:06.716757   65441 node_conditions.go:105] duration metric: took 2.521245ms to run NodePressure ...
	I0804 00:20:06.716771   65441 start.go:241] waiting for startup goroutines ...
	I0804 00:20:06.716780   65441 start.go:246] waiting for cluster config update ...
	I0804 00:20:06.716796   65441 start.go:255] writing updated cluster config ...
	I0804 00:20:06.717156   65441 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:06.765983   65441 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:06.768482   65441 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-969068" cluster and "default" namespace by default
	I0804 00:20:06.600233   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:08.603829   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:11.852948   65087 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.247618249s)
	I0804 00:20:11.853025   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:11.870882   65087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:20:11.882005   65087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:20:11.892505   65087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:20:11.892527   65087 kubeadm.go:157] found existing configuration files:
	
	I0804 00:20:11.892570   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:20:11.902005   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:20:11.902061   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:20:11.911585   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:20:11.921837   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:20:11.921911   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:20:11.101091   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:13.607073   64502 pod_ready.go:102] pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:14.600605   64502 pod_ready.go:81] duration metric: took 4m0.007136508s for pod "metrics-server-569cc877fc-hbcm9" in "kube-system" namespace to be "Ready" ...
	E0804 00:20:14.600629   64502 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0804 00:20:14.600637   64502 pod_ready.go:38] duration metric: took 4m5.120472791s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:14.600651   64502 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:14.600675   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:14.600717   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:14.669699   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:14.669724   64502 cri.go:89] found id: ""
	I0804 00:20:14.669733   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:14.669789   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.674907   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:14.674978   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:14.720830   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:14.720867   64502 cri.go:89] found id: ""
	I0804 00:20:14.720877   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:14.720937   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.726667   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:14.726729   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:14.778216   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:14.778247   64502 cri.go:89] found id: ""
	I0804 00:20:14.778256   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:14.778321   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.785349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:14.785433   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:14.836381   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:14.836408   64502 cri.go:89] found id: ""
	I0804 00:20:14.836416   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:14.836475   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.841662   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:14.841752   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:14.884803   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:14.884827   64502 cri.go:89] found id: ""
	I0804 00:20:14.884836   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:14.884904   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.890625   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:14.890696   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:14.942713   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:14.942732   64502 cri.go:89] found id: ""
	I0804 00:20:14.942739   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:14.942800   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:14.948335   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:14.948391   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:14.994869   64502 cri.go:89] found id: ""
	I0804 00:20:14.994900   64502 logs.go:276] 0 containers: []
	W0804 00:20:14.994910   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:14.994917   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:14.994985   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:15.034528   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.034557   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.034564   64502 cri.go:89] found id: ""
	I0804 00:20:15.034574   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:15.034633   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.039335   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:15.044600   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:15.044625   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:15.091365   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:15.091398   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:15.144896   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:15.144924   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:15.675849   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:15.675901   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:15.691640   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:15.691699   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:11.931844   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.941369   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:20:11.941430   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:20:11.951279   65087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:20:11.961201   65087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:20:11.961275   65087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:20:11.972150   65087 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:20:12.024567   65087 kubeadm.go:310] W0804 00:20:12.001791    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.025287   65087 kubeadm.go:310] W0804 00:20:12.002530    2996 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0804 00:20:12.154034   65087 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:20:20.358593   65087 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0804 00:20:20.358649   65087 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:20:20.358721   65087 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:20:20.358834   65087 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:20:20.358953   65087 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0804 00:20:20.359013   65087 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:20:20.360498   65087 out.go:204]   - Generating certificates and keys ...
	I0804 00:20:20.360590   65087 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:20:20.360692   65087 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:20:20.360767   65087 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:20:20.360821   65087 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:20:20.360915   65087 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:20:20.360971   65087 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:20:20.361042   65087 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:20:20.361124   65087 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:20:20.361228   65087 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:20:20.361307   65087 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:20:20.361342   65087 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:20:20.361436   65087 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:20:20.361523   65087 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:20:20.361592   65087 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:20:20.361642   65087 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:20:20.361698   65087 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:20:20.361746   65087 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:20:20.361815   65087 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:20:20.361881   65087 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:20:20.363214   65087 out.go:204]   - Booting up control plane ...
	I0804 00:20:20.363312   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:20:20.363381   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:20:20.363450   65087 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:20:20.363541   65087 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:20:20.363628   65087 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:20:20.363678   65087 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:20:20.363790   65087 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:20:20.363889   65087 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0804 00:20:20.363960   65087 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.009132208s
	I0804 00:20:20.364044   65087 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:20:20.364094   65087 kubeadm.go:310] [api-check] The API server is healthy after 4.501833932s
	I0804 00:20:20.364201   65087 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:20:20.364321   65087 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:20:20.364397   65087 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:20:20.364585   65087 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-118016 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:20:20.364634   65087 kubeadm.go:310] [bootstrap-token] Using token: bbnfwa.jorg7huedw5cbtk2
	I0804 00:20:20.366569   65087 out.go:204]   - Configuring RBAC rules ...
	I0804 00:20:20.366705   65087 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:20:20.366823   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:20:20.366979   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:20:20.367160   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:20:20.367275   65087 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:20:20.367352   65087 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:20:20.367447   65087 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:20:20.367510   65087 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:20:20.367574   65087 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:20:20.367580   65087 kubeadm.go:310] 
	I0804 00:20:20.367629   65087 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:20:20.367635   65087 kubeadm.go:310] 
	I0804 00:20:20.367697   65087 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:20:20.367703   65087 kubeadm.go:310] 
	I0804 00:20:20.367724   65087 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:20:20.367784   65087 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:20:20.367828   65087 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:20:20.367834   65087 kubeadm.go:310] 
	I0804 00:20:20.367886   65087 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:20:20.367903   65087 kubeadm.go:310] 
	I0804 00:20:20.367971   65087 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:20:20.367981   65087 kubeadm.go:310] 
	I0804 00:20:20.368050   65087 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:20:20.368125   65087 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:20:20.368187   65087 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:20:20.368193   65087 kubeadm.go:310] 
	I0804 00:20:20.368262   65087 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:20:20.368353   65087 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:20:20.368367   65087 kubeadm.go:310] 
	I0804 00:20:20.368480   65087 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368588   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e \
	I0804 00:20:20.368614   65087 kubeadm.go:310] 	--control-plane 
	I0804 00:20:20.368621   65087 kubeadm.go:310] 
	I0804 00:20:20.368705   65087 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:20:20.368712   65087 kubeadm.go:310] 
	I0804 00:20:20.368810   65087 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bbnfwa.jorg7huedw5cbtk2 \
	I0804 00:20:20.368933   65087 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2b2246b32b08b4cc647aa42821e65ccf575b325968f5a33ace6356b1118f988e 
	I0804 00:20:20.368947   65087 cni.go:84] Creating CNI manager for ""
	I0804 00:20:20.368957   65087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:20:20.370303   65087 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:20:15.859131   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:15.859169   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:15.917686   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:15.917726   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:15.964285   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:15.964328   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:16.019646   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:16.019679   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:16.069379   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:16.069416   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:16.129752   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:16.129842   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:16.177015   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:16.177043   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:16.220526   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:16.220560   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:18.771509   64502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:18.793252   64502 api_server.go:72] duration metric: took 4m15.042389156s to wait for apiserver process to appear ...
	I0804 00:20:18.793291   64502 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:18.793334   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:18.793415   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:18.839339   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:18.839363   64502 cri.go:89] found id: ""
	I0804 00:20:18.839372   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:18.839432   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.843932   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:18.844005   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:18.894398   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:18.894422   64502 cri.go:89] found id: ""
	I0804 00:20:18.894432   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:18.894491   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.899596   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:18.899664   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:18.947077   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:18.947106   64502 cri.go:89] found id: ""
	I0804 00:20:18.947114   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:18.947168   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:18.952349   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:18.952431   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:18.999336   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:18.999361   64502 cri.go:89] found id: ""
	I0804 00:20:18.999377   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:18.999441   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.005419   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:19.005502   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:19.061388   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.061413   64502 cri.go:89] found id: ""
	I0804 00:20:19.061422   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:19.061476   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.066071   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:19.066139   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:19.111849   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.111872   64502 cri.go:89] found id: ""
	I0804 00:20:19.111879   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:19.111929   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.116272   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:19.116323   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:19.157387   64502 cri.go:89] found id: ""
	I0804 00:20:19.157414   64502 logs.go:276] 0 containers: []
	W0804 00:20:19.157423   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:19.157431   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:19.157493   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:19.199627   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.199654   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.199660   64502 cri.go:89] found id: ""
	I0804 00:20:19.199669   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:19.199727   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.204317   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:19.208707   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:19.208729   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:19.261684   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:19.261717   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:19.309861   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:19.309890   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:19.349376   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:19.349403   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:19.388561   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:19.388590   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:19.466119   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:19.466163   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:19.515539   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:19.515575   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:19.561529   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:19.561556   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:19.626188   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:19.626219   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:19.640348   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:19.640372   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:19.772397   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:19.772439   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:19.827217   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:19.827260   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:20.306543   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:20.306589   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:20.371388   65087 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:20:20.384738   65087 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:20:20.404547   65087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:20:20.404607   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:20.404659   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-118016 minikube.k8s.io/updated_at=2024_08_04T00_20_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=no-preload-118016 minikube.k8s.io/primary=true
	I0804 00:20:20.602476   65087 ops.go:34] apiserver oom_adj: -16
	I0804 00:20:20.602551   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.103011   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:21.602888   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.102779   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:22.603282   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.103337   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:23.603522   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.103510   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.603474   65087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:20:24.689895   65087 kubeadm.go:1113] duration metric: took 4.285337247s to wait for elevateKubeSystemPrivileges
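The burst of `kubectl get sa default` calls above is a plain poll: the same command is retried roughly every 500ms until the default service account exists, which took about 4.3s here before the elevateKubeSystemPrivileges step finished. A stand-alone sketch of that wait, assuming a local kubectl on PATH and a kubeconfig already pointing at the cluster (rather than the in-VM binary path used in the log):

    // waitsa.go: poll until `kubectl get sa default` succeeds or a timeout elapses.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Exit code 0 means the service account exists and is readable.
    		if err := exec.Command("kubectl", "get", "sa", "default", "-n", "default").Run(); err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
    	os.Exit(1)
    }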
	I0804 00:20:24.689931   65087 kubeadm.go:394] duration metric: took 5m0.881315877s to StartCluster
	I0804 00:20:24.689947   65087 settings.go:142] acquiring lock: {Name:mk7c995930d8fd1299955db1f1d97f2271f4f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.690018   65087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:20:24.691559   65087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/kubeconfig: {Name:mk61cac1f1766f94f3b99ad784ddd7962af710b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:20:24.691784   65087 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.137 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:20:24.691848   65087 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:20:24.691963   65087 addons.go:69] Setting storage-provisioner=true in profile "no-preload-118016"
	I0804 00:20:24.691977   65087 addons.go:69] Setting default-storageclass=true in profile "no-preload-118016"
	I0804 00:20:24.691999   65087 addons.go:234] Setting addon storage-provisioner=true in "no-preload-118016"
	I0804 00:20:24.692001   65087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-118016"
	I0804 00:20:24.692001   65087 config.go:182] Loaded profile config "no-preload-118016": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:20:24.692018   65087 addons.go:69] Setting metrics-server=true in profile "no-preload-118016"
	W0804 00:20:24.692007   65087 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:20:24.692068   65087 addons.go:234] Setting addon metrics-server=true in "no-preload-118016"
	I0804 00:20:24.692086   65087 host.go:66] Checking if "no-preload-118016" exists ...
	W0804 00:20:24.692099   65087 addons.go:243] addon metrics-server should already be in state true
	I0804 00:20:24.692142   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.692440   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692464   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692494   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692441   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.692517   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.692566   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.693590   65087 out.go:177] * Verifying Kubernetes components...
	I0804 00:20:24.695139   65087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:20:24.708841   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0804 00:20:24.709324   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.709911   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.709937   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.710305   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.710594   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.712827   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0804 00:20:24.712894   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0804 00:20:24.713349   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713375   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.713884   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713899   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.713923   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.713942   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.714211   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714264   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.714421   65087 addons.go:234] Setting addon default-storageclass=true in "no-preload-118016"
	W0804 00:20:24.714440   65087 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:20:24.714468   65087 host.go:66] Checking if "no-preload-118016" exists ...
	I0804 00:20:24.714605   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714623   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714801   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.714846   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.714993   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.715014   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.730476   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0804 00:20:24.730811   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0804 00:20:24.730912   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731145   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.731470   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731486   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731733   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.731748   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.731808   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732034   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.732123   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.732294   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.733677   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0804 00:20:24.734185   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.734257   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734306   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.734689   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.734709   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.735090   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.735618   65087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-9607/.minikube/bin/docker-machine-driver-kvm2
	I0804 00:20:24.735643   65087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:20:24.736977   65087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:20:24.736979   65087 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0804 00:20:22.853589   64502 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0804 00:20:22.859439   64502 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0804 00:20:22.860503   64502 api_server.go:141] control plane version: v1.30.3
	I0804 00:20:22.860521   64502 api_server.go:131] duration metric: took 4.067223392s to wait for apiserver health ...
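The apiserver health wait above is an HTTPS GET against https://<node-ip>:8443/healthz that passes once it returns 200 with body "ok". A minimal sketch of the same probe, with certificate verification skipped purely to keep the example self-contained (a real client would trust the cluster CA, and /healthz may require credentials depending on the cluster's RBAC configuration):

    // healthz.go: probe a kube-apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.50.140:8443/healthz" // address taken from the log above
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
    	}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }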
	I0804 00:20:22.860528   64502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:22.860550   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:20:22.860598   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:20:22.901174   64502 cri.go:89] found id: "d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:22.901193   64502 cri.go:89] found id: ""
	I0804 00:20:22.901200   64502 logs.go:276] 1 containers: [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163]
	I0804 00:20:22.901246   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.905319   64502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:20:22.905404   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:20:22.948354   64502 cri.go:89] found id: "7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:22.948378   64502 cri.go:89] found id: ""
	I0804 00:20:22.948387   64502 logs.go:276] 1 containers: [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc]
	I0804 00:20:22.948438   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.952776   64502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:20:22.952863   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:20:22.989339   64502 cri.go:89] found id: "102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
	I0804 00:20:22.989376   64502 cri.go:89] found id: ""
	I0804 00:20:22.989385   64502 logs.go:276] 1 containers: [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c]
	I0804 00:20:22.989443   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:22.993833   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:20:22.993909   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:20:23.035367   64502 cri.go:89] found id: "5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.035385   64502 cri.go:89] found id: ""
	I0804 00:20:23.035392   64502 logs.go:276] 1 containers: [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac]
	I0804 00:20:23.035434   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.040184   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:20:23.040259   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:20:23.078508   64502 cri.go:89] found id: "08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.078529   64502 cri.go:89] found id: ""
	I0804 00:20:23.078538   64502 logs.go:276] 1 containers: [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b]
	I0804 00:20:23.078601   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.082907   64502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:20:23.082969   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:20:23.120846   64502 cri.go:89] found id: "d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.120870   64502 cri.go:89] found id: ""
	I0804 00:20:23.120880   64502 logs.go:276] 1 containers: [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12]
	I0804 00:20:23.120943   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.125641   64502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:20:23.125702   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:20:23.172188   64502 cri.go:89] found id: ""
	I0804 00:20:23.172212   64502 logs.go:276] 0 containers: []
	W0804 00:20:23.172224   64502 logs.go:278] No container was found matching "kindnet"
	I0804 00:20:23.172232   64502 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0804 00:20:23.172297   64502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0804 00:20:23.218188   64502 cri.go:89] found id: "5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.218207   64502 cri.go:89] found id: "b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.218211   64502 cri.go:89] found id: ""
	I0804 00:20:23.218217   64502 logs.go:276] 2 containers: [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c]
	I0804 00:20:23.218268   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.222562   64502 ssh_runner.go:195] Run: which crictl
	I0804 00:20:23.226965   64502 logs.go:123] Gathering logs for etcd [7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc] ...
	I0804 00:20:23.226989   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7327ad855d4f6f8c6924ba4fe59928e386a0ec297465ce8bd39534e6757814fc"
	I0804 00:20:23.269384   64502 logs.go:123] Gathering logs for kube-proxy [08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b] ...
	I0804 00:20:23.269414   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08432bdee33dc6be2735cfbfce076fd83cc88ea31828bc15a2f458605e7eeb9b"
	I0804 00:20:23.309148   64502 logs.go:123] Gathering logs for storage-provisioner [5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca] ...
	I0804 00:20:23.309178   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5820e4bb2538f1b910846bb2361cc0c0d3da637afdda543b317e332685c975ca"
	I0804 00:20:23.356908   64502 logs.go:123] Gathering logs for storage-provisioner [b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c] ...
	I0804 00:20:23.356936   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4591fddfa08b4ceffc4cd79883effbc40b990fc4d43e0bf10ed25c52bd7a11c"
	I0804 00:20:23.395352   64502 logs.go:123] Gathering logs for container status ...
	I0804 00:20:23.395381   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:20:23.450901   64502 logs.go:123] Gathering logs for kube-scheduler [5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac] ...
	I0804 00:20:23.450925   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdb842231bc70aef03e03b969831f7b0d69ca13ac8bf3dff32d45211ac126ac"
	I0804 00:20:23.488908   64502 logs.go:123] Gathering logs for kube-controller-manager [d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12] ...
	I0804 00:20:23.488945   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7780d9d7ff2feb0b64a73620c4d8e746e626e6f2e19517059dde810df5ecf12"
	I0804 00:20:23.551780   64502 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:20:23.551808   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:20:23.975030   64502 logs.go:123] Gathering logs for kubelet ...
	I0804 00:20:23.975070   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:20:24.035464   64502 logs.go:123] Gathering logs for dmesg ...
	I0804 00:20:24.035497   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:20:24.053118   64502 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:20:24.053148   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:20:24.197157   64502 logs.go:123] Gathering logs for kube-apiserver [d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163] ...
	I0804 00:20:24.197189   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d044ac1fa318f2e4f1aaad0d9c51df9ee673506186d7ac41fae3839597442163"
	I0804 00:20:24.254049   64502 logs.go:123] Gathering logs for coredns [102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c] ...
	I0804 00:20:24.254083   64502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102bbb96ee07ab5ecc692e75f377b0663438cdc2223146035f9b438c0c6b0b3c"
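The "found id:" lines above come from listing containers with a name filter, one `crictl ps -a --quiet --name=<component>` call per control-plane component; the kindnet query returning zero IDs is what produces the W-level "No container was found matching \"kindnet\"" line. A small sketch of the same discovery, assuming crictl is configured for the node's CRI socket:

    // findids.go: list CRI container IDs per control-plane component, mirroring
    // the crictl ps calls in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out)) // one container ID per line
    		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
    	}
    }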
	I0804 00:20:24.738735   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:20:24.738757   65087 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:20:24.738785   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.738836   65087 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:24.738847   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:20:24.738860   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.742131   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742459   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742539   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.742569   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.742690   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.742968   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743009   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.743254   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.743142   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743174   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.743387   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.743447   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.743590   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.743720   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
	I0804 00:20:24.754983   65087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0804 00:20:24.755436   65087 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:20:24.755877   65087 main.go:141] libmachine: Using API Version  1
	I0804 00:20:24.755901   65087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:20:24.756229   65087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:20:24.756454   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetState
	I0804 00:20:24.758285   65087 main.go:141] libmachine: (no-preload-118016) Calling .DriverName
	I0804 00:20:24.758520   65087 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:24.758537   65087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:20:24.758555   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHHostname
	I0804 00:20:24.761268   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.761715   65087 main.go:141] libmachine: (no-preload-118016) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:41:20", ip: ""} in network mk-no-preload-118016: {Iface:virbr3 ExpiryTime:2024-08-04 01:06:02 +0000 UTC Type:0 Mac:52:54:00:be:41:20 Iaid: IPaddr:192.168.61.137 Prefix:24 Hostname:no-preload-118016 Clientid:01:52:54:00:be:41:20}
	I0804 00:20:24.761739   65087 main.go:141] libmachine: (no-preload-118016) DBG | domain no-preload-118016 has defined IP address 192.168.61.137 and MAC address 52:54:00:be:41:20 in network mk-no-preload-118016
	I0804 00:20:24.762001   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHPort
	I0804 00:20:24.762211   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHKeyPath
	I0804 00:20:24.762402   65087 main.go:141] libmachine: (no-preload-118016) Calling .GetSSHUsername
	I0804 00:20:24.762593   65087 sshutil.go:53] new ssh client: &{IP:192.168.61.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/no-preload-118016/id_rsa Username:docker}
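Each "new ssh client" line above records an SSH session being opened to the node (user docker, IP 192.168.61.137) with the profile's private key; the scp and kubectl apply steps that follow run over those sessions. A rough stand-alone sketch of such a connection using golang.org/x/crypto/ssh; the key path is a placeholder, and the host-key check is disabled only because the target is a throwaway test VM:

    // sshclient.go: open an SSH session with a private key and run one command,
    // roughly what the sshutil lines above record.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/path/to/machines/PROFILE/id_rsa") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
    	}
    	client, err := ssh.Dial("tcp", "192.168.61.137:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
    	fmt.Printf("%s err=%v\n", out, err)
    }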
	I0804 00:20:24.942323   65087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:20:24.971293   65087 node_ready.go:35] waiting up to 6m0s for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991406   65087 node_ready.go:49] node "no-preload-118016" has status "Ready":"True"
	I0804 00:20:24.991428   65087 node_ready.go:38] duration metric: took 20.101499ms for node "no-preload-118016" to be "Ready" ...
	I0804 00:20:24.991436   65087 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:20:25.004484   65087 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:25.069407   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:20:25.069437   65087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0804 00:20:25.093645   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:20:25.178590   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:20:25.178615   65087 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:20:25.246634   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:20:25.272880   65087 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.272916   65087 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:20:25.368517   65087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:20:25.442382   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442406   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.442668   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.442711   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.442717   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.442726   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.442732   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.444425   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.444456   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.444463   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:25.451275   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:25.451298   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:25.451605   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:25.451620   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:25.451617   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218075   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218105   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218391   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218416   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.218427   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.218435   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.218440   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218702   65087 main.go:141] libmachine: (no-preload-118016) DBG | Closing plugin on server side
	I0804 00:20:26.218764   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.218786   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.671629   65087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.303057537s)
	I0804 00:20:26.671683   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.671702   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672010   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672031   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672041   65087 main.go:141] libmachine: Making call to close driver server
	I0804 00:20:26.672049   65087 main.go:141] libmachine: (no-preload-118016) Calling .Close
	I0804 00:20:26.672327   65087 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:20:26.672363   65087 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:20:26.672378   65087 addons.go:475] Verifying addon metrics-server=true in "no-preload-118016"
	I0804 00:20:26.674374   65087 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0804 00:20:26.803868   64502 system_pods.go:59] 8 kube-system pods found
	I0804 00:20:26.803909   64502 system_pods.go:61] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.803917   64502 system_pods.go:61] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.803923   64502 system_pods.go:61] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.803928   64502 system_pods.go:61] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.803934   64502 system_pods.go:61] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.803940   64502 system_pods.go:61] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.803948   64502 system_pods.go:61] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.803957   64502 system_pods.go:61] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.803966   64502 system_pods.go:74] duration metric: took 3.943432992s to wait for pod list to return data ...
	I0804 00:20:26.803978   64502 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:26.808760   64502 default_sa.go:45] found service account: "default"
	I0804 00:20:26.808786   64502 default_sa.go:55] duration metric: took 4.797226ms for default service account to be created ...
	I0804 00:20:26.808796   64502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:26.814721   64502 system_pods.go:86] 8 kube-system pods found
	I0804 00:20:26.814753   64502 system_pods.go:89] "coredns-7db6d8ff4d-7gbcf" [9bf46b6f-da6d-4d8a-9b91-6c11f5225072] Running
	I0804 00:20:26.814761   64502 system_pods.go:89] "etcd-embed-certs-877598" [41ec13a5-2d12-4a63-b906-22dc6c51e065] Running
	I0804 00:20:26.814768   64502 system_pods.go:89] "kube-apiserver-embed-certs-877598" [5a1953fd-df24-48f2-8634-41b1bd7a7e66] Running
	I0804 00:20:26.814774   64502 system_pods.go:89] "kube-controller-manager-embed-certs-877598" [8429892d-c994-4b07-badd-765e977ad214] Running
	I0804 00:20:26.814780   64502 system_pods.go:89] "kube-proxy-wk8zf" [2637a235-d0b5-46f3-bbad-ac7386ce61c7] Running
	I0804 00:20:26.814787   64502 system_pods.go:89] "kube-scheduler-embed-certs-877598" [eea6b719-0930-4866-8e01-ea7859f2ffc6] Running
	I0804 00:20:26.814798   64502 system_pods.go:89] "metrics-server-569cc877fc-hbcm9" [de6ad720-ed0c-41ea-a1b4-716443257d7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:26.814807   64502 system_pods.go:89] "storage-provisioner" [373a00e8-1604-4d33-a4aa-95d3a0caf930] Running
	I0804 00:20:26.814819   64502 system_pods.go:126] duration metric: took 6.01558ms to wait for k8s-apps to be running ...
	I0804 00:20:26.814828   64502 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:26.814894   64502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:26.837462   64502 system_svc.go:56] duration metric: took 22.624089ms WaitForService to wait for kubelet
	I0804 00:20:26.837494   64502 kubeadm.go:582] duration metric: took 4m23.086636256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:26.837522   64502 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:26.841517   64502 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:26.841548   64502 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:26.841563   64502 node_conditions.go:105] duration metric: took 4.034693ms to run NodePressure ...
	I0804 00:20:26.841576   64502 start.go:241] waiting for startup goroutines ...
	I0804 00:20:26.841586   64502 start.go:246] waiting for cluster config update ...
	I0804 00:20:26.841600   64502 start.go:255] writing updated cluster config ...
	I0804 00:20:26.841939   64502 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:26.908142   64502 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:20:26.910191   64502 out.go:177] * Done! kubectl is now configured to use "embed-certs-877598" cluster and "default" namespace by default
	I0804 00:20:26.675679   65087 addons.go:510] duration metric: took 1.98382947s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0804 00:20:27.012226   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:29.511886   65087 pod_ready.go:102] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"False"
	I0804 00:20:32.011000   65087 pod_ready.go:92] pod "etcd-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:32.011021   65087 pod_ready.go:81] duration metric: took 7.006508451s for pod "etcd-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:32.011031   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518235   65087 pod_ready.go:92] pod "kube-apiserver-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.518260   65087 pod_ready.go:81] duration metric: took 1.507219524s for pod "kube-apiserver-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.518270   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522894   65087 pod_ready.go:92] pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.522916   65087 pod_ready.go:81] duration metric: took 4.639763ms for pod "kube-controller-manager-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.522928   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527271   65087 pod_ready.go:92] pod "kube-proxy-4jqng" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.527291   65087 pod_ready.go:81] duration metric: took 4.353851ms for pod "kube-proxy-4jqng" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.527303   65087 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531405   65087 pod_ready.go:92] pod "kube-scheduler-no-preload-118016" in "kube-system" namespace has status "Ready":"True"
	I0804 00:20:33.531424   65087 pod_ready.go:81] duration metric: took 4.113418ms for pod "kube-scheduler-no-preload-118016" in "kube-system" namespace to be "Ready" ...
	I0804 00:20:33.531433   65087 pod_ready.go:38] duration metric: took 8.539987559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
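The pod_ready waits above poll each system-critical pod until its Ready condition is True; etcd took about 7s here and the remaining control-plane pods were already Ready. A hedged client-go sketch of the same check for a single pod (the pod name is copied from the log above; a kubeconfig at the default location is assumed):

    // podready.go: wait for one pod's Ready condition using client-go, similar
    // in spirit to the pod_ready waits in the log above.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"path/filepath"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	name, ns := "etcd-no-preload-118016", "kube-system" // pod name taken from the log above
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && isReady(pod) {
    			fmt.Printf("pod %s/%s is Ready\n", ns, name)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatalf("timed out waiting for pod %s/%s", ns, name)
    }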
	I0804 00:20:33.531449   65087 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:20:33.531505   65087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:20:33.546783   65087 api_server.go:72] duration metric: took 8.854972636s to wait for apiserver process to appear ...
	I0804 00:20:33.546813   65087 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:20:33.546832   65087 api_server.go:253] Checking apiserver healthz at https://192.168.61.137:8443/healthz ...
	I0804 00:20:33.551131   65087 api_server.go:279] https://192.168.61.137:8443/healthz returned 200:
	ok
	I0804 00:20:33.552092   65087 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:20:33.552112   65087 api_server.go:131] duration metric: took 5.292367ms to wait for apiserver health ...
	I0804 00:20:33.552119   65087 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:20:33.557965   65087 system_pods.go:59] 9 kube-system pods found
	I0804 00:20:33.557987   65087 system_pods.go:61] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.557995   65087 system_pods.go:61] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.558000   65087 system_pods.go:61] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.558005   65087 system_pods.go:61] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.558009   65087 system_pods.go:61] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.558014   65087 system_pods.go:61] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.558018   65087 system_pods.go:61] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.558026   65087 system_pods.go:61] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.558035   65087 system_pods.go:61] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.558045   65087 system_pods.go:74] duration metric: took 5.921154ms to wait for pod list to return data ...
	I0804 00:20:33.558057   65087 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:20:33.608139   65087 default_sa.go:45] found service account: "default"
	I0804 00:20:33.608164   65087 default_sa.go:55] duration metric: took 50.097877ms for default service account to be created ...
	I0804 00:20:33.608174   65087 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:20:33.811878   65087 system_pods.go:86] 9 kube-system pods found
	I0804 00:20:33.811906   65087 system_pods.go:89] "coredns-6f6b679f8f-gg97s" [28bfbbe9-5051-4674-8b43-f07bfdbc6916] Running
	I0804 00:20:33.811912   65087 system_pods.go:89] "coredns-6f6b679f8f-lj494" [74baae1c-e4c4-4125-aa9d-aeaac74a6ecd] Running
	I0804 00:20:33.811916   65087 system_pods.go:89] "etcd-no-preload-118016" [19ff6386-b0c0-41f7-89fa-fd62e8698b05] Running
	I0804 00:20:33.811920   65087 system_pods.go:89] "kube-apiserver-no-preload-118016" [d791bfcb-00d1-47b8-a9c2-ac8e68af4062] Running
	I0804 00:20:33.811925   65087 system_pods.go:89] "kube-controller-manager-no-preload-118016" [cef9e6fa-7a9d-4d84-8693-216d2eeab428] Running
	I0804 00:20:33.811928   65087 system_pods.go:89] "kube-proxy-4jqng" [c254599f-e58d-4d0a-81c9-1c98c0341f26] Running
	I0804 00:20:33.811932   65087 system_pods.go:89] "kube-scheduler-no-preload-118016" [0deea66f-2336-4371-9492-5af84f3f0fe8] Running
	I0804 00:20:33.811939   65087 system_pods.go:89] "metrics-server-6867b74b74-9gw27" [2f3cdf21-9e68-49b9-a6e0-927465738f23] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0804 00:20:33.811943   65087 system_pods.go:89] "storage-provisioner" [07fdb5fa-a2e9-4d3d-8149-25720c320d51] Running
	I0804 00:20:33.811950   65087 system_pods.go:126] duration metric: took 203.770479ms to wait for k8s-apps to be running ...
	I0804 00:20:33.811957   65087 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:20:33.812000   65087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:20:33.827146   65087 system_svc.go:56] duration metric: took 15.17867ms WaitForService to wait for kubelet
	I0804 00:20:33.827176   65087 kubeadm.go:582] duration metric: took 9.135367695s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:20:33.827199   65087 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:20:34.009032   65087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:20:34.009056   65087 node_conditions.go:123] node cpu capacity is 2
	I0804 00:20:34.009076   65087 node_conditions.go:105] duration metric: took 181.872031ms to run NodePressure ...
	I0804 00:20:34.009086   65087 start.go:241] waiting for startup goroutines ...
	I0804 00:20:34.009112   65087 start.go:246] waiting for cluster config update ...
	I0804 00:20:34.009128   65087 start.go:255] writing updated cluster config ...
	I0804 00:20:34.009423   65087 ssh_runner.go:195] Run: rm -f paused
	I0804 00:20:34.059796   65087 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 00:20:34.061903   65087 out.go:177] * Done! kubectl is now configured to use "no-preload-118016" cluster and "default" namespace by default
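The closing "minor skew" line compares the minor versions of the local kubectl (1.30.3) and the cluster (1.31.0-rc.0); a difference of one minor version is within the +/-1 skew kubectl supports, so it is reported rather than treated as an error. Roughly the comparison involved, with both version strings hard-coded from the log purely for illustration:

    // skew.go: compute the client/cluster minor-version difference summarized
    // by the "minor skew" messages above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component from a version like "1.31.0-rc.0".
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectlVersion, clusterVersion := "1.30.3", "1.31.0-rc.0"
    	skew := minor(clusterVersion) - minor(kubectlVersion)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
    }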
	I0804 00:21:00.664979   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:21:00.665100   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:21:00.666810   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:00.666904   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:00.667020   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:00.667150   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:00.667278   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:00.667370   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:00.670254   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:00.670337   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:00.670431   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:00.670537   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:00.670623   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:00.670721   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:00.670788   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:00.670883   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:00.670990   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:00.671079   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:00.671168   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:00.671217   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:00.671265   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:00.671359   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:00.671442   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:00.671529   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:00.671611   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:00.671756   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:00.671856   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:00.671888   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:00.671940   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:00.673410   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:00.673506   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:00.673573   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:00.673627   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:00.673692   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:00.673828   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:00.673876   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:00.673972   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674207   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674283   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674517   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674590   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.674752   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.674851   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675053   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675173   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:00.675451   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:00.675463   64758 kubeadm.go:310] 
	I0804 00:21:00.675511   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:21:00.675561   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:21:00.675571   64758 kubeadm.go:310] 
	I0804 00:21:00.675614   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:21:00.675656   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:21:00.675787   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:21:00.675797   64758 kubeadm.go:310] 
	I0804 00:21:00.675928   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:21:00.675970   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:21:00.676009   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:21:00.676026   64758 kubeadm.go:310] 
	I0804 00:21:00.676172   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:21:00.676278   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:21:00.676289   64758 kubeadm.go:310] 
	I0804 00:21:00.676393   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:21:00.676466   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:21:00.676532   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:21:00.676609   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:21:00.676632   64758 kubeadm.go:310] 
	W0804 00:21:00.676723   64758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 00:21:00.676765   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:21:01.138781   64758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:21:01.154405   64758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:21:01.164426   64758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:21:01.164445   64758 kubeadm.go:157] found existing configuration files:
	
	I0804 00:21:01.164496   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:21:01.173853   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:21:01.173907   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:21:01.183634   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:21:01.193283   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:21:01.193342   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:21:01.202427   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.212186   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:21:01.212235   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:21:01.222754   64758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:21:01.232996   64758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:21:01.233059   64758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:21:01.243778   64758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:21:01.319895   64758 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:21:01.319975   64758 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:21:01.474907   64758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:21:01.475029   64758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:21:01.475119   64758 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:21:01.683624   64758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:21:01.685482   64758 out.go:204]   - Generating certificates and keys ...
	I0804 00:21:01.685584   64758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:21:01.685691   64758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:21:01.685792   64758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:21:01.685880   64758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:21:01.685991   64758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:21:01.686067   64758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:21:01.686147   64758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:21:01.686285   64758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:21:01.686399   64758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:21:01.686513   64758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:21:01.686600   64758 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:21:01.686670   64758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:21:01.985613   64758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:21:02.088377   64758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:21:02.336621   64758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:21:02.448654   64758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:21:02.470140   64758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:21:02.471390   64758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:21:02.471456   64758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:21:02.610840   64758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:21:02.612641   64758 out.go:204]   - Booting up control plane ...
	I0804 00:21:02.612744   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:21:02.627044   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:21:02.629019   64758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:21:02.630430   64758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:21:02.633022   64758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:21:42.635581   64758 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:21:42.635740   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:42.636036   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:47.636656   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:47.636879   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:21:57.637900   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:21:57.638098   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:17.638425   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:17.638634   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637807   64758 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:22:57.637988   64758 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:22:57.637996   64758 kubeadm.go:310] 
	I0804 00:22:57.638035   64758 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:22:57.638079   64758 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:22:57.638085   64758 kubeadm.go:310] 
	I0804 00:22:57.638118   64758 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:22:57.638148   64758 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:22:57.638288   64758 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:22:57.638309   64758 kubeadm.go:310] 
	I0804 00:22:57.638426   64758 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:22:57.638507   64758 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:22:57.638619   64758 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:22:57.638640   64758 kubeadm.go:310] 
	I0804 00:22:57.638829   64758 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:22:57.638944   64758 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:22:57.638959   64758 kubeadm.go:310] 
	I0804 00:22:57.639107   64758 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:22:57.639191   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:22:57.639300   64758 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:22:57.639399   64758 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:22:57.639412   64758 kubeadm.go:310] 
	I0804 00:22:57.639782   64758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:22:57.639904   64758 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:22:57.640012   64758 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:22:57.640091   64758 kubeadm.go:394] duration metric: took 8m3.172057529s to StartCluster
	I0804 00:22:57.640138   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:22:57.640202   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:22:57.684020   64758 cri.go:89] found id: ""
	I0804 00:22:57.684054   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.684064   64758 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:22:57.684072   64758 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:22:57.684134   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:22:57.722756   64758 cri.go:89] found id: ""
	I0804 00:22:57.722780   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.722788   64758 logs.go:278] No container was found matching "etcd"
	I0804 00:22:57.722793   64758 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:22:57.722851   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:22:57.760371   64758 cri.go:89] found id: ""
	I0804 00:22:57.760400   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.760412   64758 logs.go:278] No container was found matching "coredns"
	I0804 00:22:57.760419   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:22:57.760476   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:22:57.796114   64758 cri.go:89] found id: ""
	I0804 00:22:57.796144   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.796155   64758 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:22:57.796162   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:22:57.796211   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:22:57.842148   64758 cri.go:89] found id: ""
	I0804 00:22:57.842179   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.842191   64758 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:22:57.842198   64758 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:22:57.842286   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:22:57.914193   64758 cri.go:89] found id: ""
	I0804 00:22:57.914218   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.914229   64758 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:22:57.914236   64758 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:22:57.914290   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:22:57.965944   64758 cri.go:89] found id: ""
	I0804 00:22:57.965973   64758 logs.go:276] 0 containers: []
	W0804 00:22:57.965984   64758 logs.go:278] No container was found matching "kindnet"
	I0804 00:22:57.965991   64758 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0804 00:22:57.966063   64758 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0804 00:22:58.003016   64758 cri.go:89] found id: ""
	I0804 00:22:58.003044   64758 logs.go:276] 0 containers: []
	W0804 00:22:58.003055   64758 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0804 00:22:58.003066   64758 logs.go:123] Gathering logs for dmesg ...
	I0804 00:22:58.003093   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:22:58.017277   64758 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:22:58.017304   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:22:58.094192   64758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:22:58.094214   64758 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:22:58.094227   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:22:58.210901   64758 logs.go:123] Gathering logs for container status ...
	I0804 00:22:58.210944   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:22:58.249283   64758 logs.go:123] Gathering logs for kubelet ...
	I0804 00:22:58.249317   64758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:22:58.300998   64758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:22:58.301054   64758 out.go:239] * 
	W0804 00:22:58.301115   64758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.301137   64758 out.go:239] * 
	W0804 00:22:58.301978   64758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:22:58.305305   64758 out.go:177] 
	W0804 00:22:58.306722   64758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:22:58.306816   64758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:22:58.306848   64758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:22:58.308372   64758 out.go:177] 
	
	
	==> CRI-O <==
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.237180836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731626237151092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b210e53-3032-404b-b56d-f2e551bc95b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.237956110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7771d48-665e-4c0f-99a3-e92201e7615e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.238020965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7771d48-665e-4c0f-99a3-e92201e7615e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.238323356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7771d48-665e-4c0f-99a3-e92201e7615e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.275949826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f952a699-330c-48ff-bbd6-abbb5e226a9e name=/runtime.v1.RuntimeService/Version
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.276075569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f952a699-330c-48ff-bbd6-abbb5e226a9e name=/runtime.v1.RuntimeService/Version
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.277634628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0676277c-0079-452c-a929-bc02940aec2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.278187557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731626278158988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0676277c-0079-452c-a929-bc02940aec2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.278920748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b53d8db5-2221-450f-8e58-82ccaf0916be name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.278980981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b53d8db5-2221-450f-8e58-82ccaf0916be name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.279040916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b53d8db5-2221-450f-8e58-82ccaf0916be name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.313777547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3844c220-6da5-4116-ad77-29eaf41e1793 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.313877097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3844c220-6da5-4116-ad77-29eaf41e1793 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.315285181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe859e04-49b7-4fa5-8287-df486e53ece1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.315785348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731626315758555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe859e04-49b7-4fa5-8287-df486e53ece1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.316450114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6027ecd-c7fa-4d68-96ca-96a51e3ee231 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.316525367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6027ecd-c7fa-4d68-96ca-96a51e3ee231 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.316572945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b6027ecd-c7fa-4d68-96ca-96a51e3ee231 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.349880859Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4829028-dad9-4d89-ab4a-c6cac8af091b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.349957804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4829028-dad9-4d89-ab4a-c6cac8af091b name=/runtime.v1.RuntimeService/Version
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.351265577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45a465d8-55a2-43d9-8f75-c84b23a910b0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.351637183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731626351618263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45a465d8-55a2-43d9-8f75-c84b23a910b0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.352453485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fcadf5a-ee1a-4c58-a0db-6bc0b5cb3938 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.352500440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fcadf5a-ee1a-4c58-a0db-6bc0b5cb3938 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:33:46 old-k8s-version-576210 crio[653]: time="2024-08-04 00:33:46.352539265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6fcadf5a-ee1a-4c58-a0db-6bc0b5cb3938 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 4 00:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050227] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041126] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.789171] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.600311] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.566673] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.215618] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049621] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.191384] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.139006] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.271189] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +6.294398] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.066429] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.776417] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[Aug 4 00:15] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 4 00:19] systemd-fstab-generator[5026]: Ignoring "noauto" option for root device
	[Aug 4 00:21] systemd-fstab-generator[5298]: Ignoring "noauto" option for root device
	[  +0.071111] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:33:46 up 19 min,  0 users,  load average: 0.07, 0.03, 0.03
	Linux old-k8s-version-576210 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009456f0)
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00099fef0, 0x4f0ac20, 0xc0009600a0, 0x1, 0xc0001000c0)
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0005bec40, 0xc0001000c0)
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008f8970, 0xc000954600)
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 04 00:33:41 old-k8s-version-576210 kubelet[6704]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 04 00:33:41 old-k8s-version-576210 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 04 00:33:41 old-k8s-version-576210 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 04 00:33:42 old-k8s-version-576210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 132.
	Aug 04 00:33:42 old-k8s-version-576210 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 04 00:33:42 old-k8s-version-576210 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 04 00:33:42 old-k8s-version-576210 kubelet[6713]: I0804 00:33:42.637639    6713 server.go:416] Version: v1.20.0
	Aug 04 00:33:42 old-k8s-version-576210 kubelet[6713]: I0804 00:33:42.637974    6713 server.go:837] Client rotation is on, will bootstrap in background
	Aug 04 00:33:42 old-k8s-version-576210 kubelet[6713]: I0804 00:33:42.640573    6713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 04 00:33:42 old-k8s-version-576210 kubelet[6713]: I0804 00:33:42.642059    6713 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 04 00:33:42 old-k8s-version-576210 kubelet[6713]: W0804 00:33:42.642359    6713 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 2 (217.456433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-576210" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (102.23s)
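Note on the old-k8s-version failures above: the stdout dump shows kubeadm init (v1.20.0) timing out in the wait-control-plane phase because the kubelet healthz endpoint on 127.0.0.1:10248 never answers, while the kubelet journal shows the unit crash-looping (restart counter at 132) with "Cannot detect current cgroup on cgroup v2" as its last warning. The snippet below is only a triage sketch, not part of the test run; it assumes shell access to the node (for example via `minikube ssh -p old-k8s-version-576210`) and simply replays the commands kubeadm itself suggests in the log output above.

	# hypothetical triage sketch; every command is taken from the kubeadm/minikube output above
	sudo systemctl status kubelet --no-pager              # is the unit active or crash-looping?
	sudo journalctl -xeu kubelet -n 100 --no-pager        # last kubelet stack trace / exit reason
	curl -sSL http://localhost:10248/healthz              # the endpoint kubeadm polls during wait-control-plane
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet is in fact failing on the cgroup driver, the log's own Suggestion line is the next step: retry with `minikube start --extra-config=kubelet.cgroup-driver=systemd` (see https://github.com/kubernetes/minikube/issues/4172 referenced above).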

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 59.09
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 16.62
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 50.53
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 104.52
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 212.7
40 TestAddons/serial/GCPAuth/Namespaces 2.47
42 TestAddons/parallel/Registry 17.5
44 TestAddons/parallel/InspektorGadget 10.8
46 TestAddons/parallel/HelmTiller 12.35
48 TestAddons/parallel/CSI 79.44
49 TestAddons/parallel/Headlamp 22.15
50 TestAddons/parallel/CloudSpanner 5.81
51 TestAddons/parallel/LocalPath 57.31
52 TestAddons/parallel/NvidiaDevicePlugin 6.65
53 TestAddons/parallel/Yakd 11.89
55 TestCertOptions 47.53
56 TestCertExpiration 362.57
58 TestForceSystemdFlag 69.09
59 TestForceSystemdEnv 47.73
61 TestKVMDriverInstallOrUpdate 8
65 TestErrorSpam/setup 42.89
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.55
69 TestErrorSpam/unpause 1.58
70 TestErrorSpam/stop 5.33
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 95.24
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 41.72
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.12
82 TestFunctional/serial/CacheCmd/cache/add_local 2.21
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 29.72
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.46
93 TestFunctional/serial/LogsFileCmd 1.45
94 TestFunctional/serial/InvalidService 4.7
96 TestFunctional/parallel/ConfigCmd 0.3
97 TestFunctional/parallel/DashboardCmd 19.03
98 TestFunctional/parallel/DryRun 0.27
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.76
104 TestFunctional/parallel/ServiceCmdConnect 23.52
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 47.03
108 TestFunctional/parallel/SSHCmd 0.42
109 TestFunctional/parallel/CpCmd 1.31
110 TestFunctional/parallel/MySQL 26.03
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.29
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
120 TestFunctional/parallel/License 0.66
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.45
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
127 TestFunctional/parallel/ImageCommands/ImageBuild 4.62
128 TestFunctional/parallel/ImageCommands/Setup 1.96
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
142 TestFunctional/parallel/ProfileCmd/profile_list 0.28
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.75
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.33
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.18
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.89
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.26
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.68
151 TestFunctional/parallel/ServiceCmd/DeployApp 8.18
152 TestFunctional/parallel/MountCmd/any-port 8.85
153 TestFunctional/parallel/ServiceCmd/List 0.44
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
156 TestFunctional/parallel/ServiceCmd/Format 0.35
157 TestFunctional/parallel/ServiceCmd/URL 0.32
158 TestFunctional/parallel/MountCmd/specific-port 1.76
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.35
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 277
167 TestMultiControlPlane/serial/DeployApp 6.11
168 TestMultiControlPlane/serial/PingHostFromPods 1.27
169 TestMultiControlPlane/serial/AddWorkerNode 58.52
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
172 TestMultiControlPlane/serial/CopyFile 12.75
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.19
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
181 TestMultiControlPlane/serial/RestartCluster 345.94
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
183 TestMultiControlPlane/serial/AddSecondaryNode 81.95
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
188 TestJSONOutput/start/Command 60.8
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.74
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.66
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.34
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 88.74
220 TestMountStart/serial/StartWithMountFirst 28.11
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 28.49
223 TestMountStart/serial/VerifyMountSecond 0.38
224 TestMountStart/serial/DeleteFirst 0.66
225 TestMountStart/serial/VerifyMountPostDelete 0.38
226 TestMountStart/serial/Stop 1.29
227 TestMountStart/serial/RestartStopped 22.82
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 131.44
232 TestMultiNode/serial/DeployApp2Nodes 5.32
233 TestMultiNode/serial/PingHostFrom2Pods 0.78
234 TestMultiNode/serial/AddNode 51.63
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.22
237 TestMultiNode/serial/CopyFile 7.04
238 TestMultiNode/serial/StopNode 2.28
239 TestMultiNode/serial/StartAfterStop 40.34
241 TestMultiNode/serial/DeleteNode 2.17
243 TestMultiNode/serial/RestartMultiNode 191.36
244 TestMultiNode/serial/ValidateNameConflict 44.68
251 TestScheduledStopUnix 115.13
255 TestRunningBinaryUpgrade 225.12
260 TestPause/serial/Start 128.96
261 TestStoppedBinaryUpgrade/Setup 2.62
262 TestStoppedBinaryUpgrade/Upgrade 117.56
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
274 TestNoKubernetes/serial/StartWithK8s 48.38
282 TestNetworkPlugins/group/false 3.04
288 TestNoKubernetes/serial/StartWithStopK8s 42.02
289 TestNoKubernetes/serial/Start 26.52
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
291 TestNoKubernetes/serial/ProfileList 15.26
292 TestNoKubernetes/serial/Stop 1.28
293 TestNoKubernetes/serial/StartNoArgs 25.12
295 TestStartStop/group/no-preload/serial/FirstStart 150.67
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
298 TestStartStop/group/embed-certs/serial/FirstStart 77.52
299 TestStartStop/group/embed-certs/serial/DeployApp 10.28
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.5
304 TestStartStop/group/no-preload/serial/DeployApp 11.3
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
313 TestStartStop/group/embed-certs/serial/SecondStart 636.49
314 TestStartStop/group/old-k8s-version/serial/Stop 2.59
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
318 TestStartStop/group/no-preload/serial/SecondStart 572.44
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 494.71
330 TestStartStop/group/newest-cni/serial/FirstStart 53.85
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
333 TestStartStop/group/newest-cni/serial/Stop 10.47
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
335 TestStartStop/group/newest-cni/serial/SecondStart 42.01
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.1
339 TestStartStop/group/newest-cni/serial/Pause 2.89
340 TestNetworkPlugins/group/auto/Start 102.06
341 TestNetworkPlugins/group/kindnet/Start 94.54
342 TestNetworkPlugins/group/calico/Start 130.64
343 TestNetworkPlugins/group/auto/KubeletFlags 0.21
344 TestNetworkPlugins/group/auto/NetCatPod 11.23
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
347 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
348 TestNetworkPlugins/group/custom-flannel/Start 83.37
349 TestNetworkPlugins/group/auto/DNS 0.26
350 TestNetworkPlugins/group/auto/Localhost 0.18
351 TestNetworkPlugins/group/auto/HairPin 0.18
352 TestNetworkPlugins/group/kindnet/DNS 0.19
353 TestNetworkPlugins/group/kindnet/Localhost 0.16
354 TestNetworkPlugins/group/kindnet/HairPin 0.15
355 TestNetworkPlugins/group/enable-default-cni/Start 115.19
356 TestNetworkPlugins/group/flannel/Start 113.76
357 TestNetworkPlugins/group/calico/ControllerPod 6.08
358 TestNetworkPlugins/group/calico/KubeletFlags 0.36
359 TestNetworkPlugins/group/calico/NetCatPod 12.82
360 TestNetworkPlugins/group/calico/DNS 0.14
361 TestNetworkPlugins/group/calico/Localhost 0.12
362 TestNetworkPlugins/group/calico/HairPin 0.14
363 TestNetworkPlugins/group/bridge/Start 76.28
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.3
366 TestNetworkPlugins/group/custom-flannel/DNS 0.18
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.22
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
373 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
374 TestNetworkPlugins/group/flannel/NetCatPod 11.25
375 TestNetworkPlugins/group/bridge/NetCatPod 13.25
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
379 TestNetworkPlugins/group/flannel/DNS 0.18
380 TestNetworkPlugins/group/flannel/Localhost 0.15
381 TestNetworkPlugins/group/flannel/HairPin 0.13
382 TestNetworkPlugins/group/bridge/DNS 0.17
383 TestNetworkPlugins/group/bridge/Localhost 0.13
384 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (59.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-013110 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-013110 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (59.087871713s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (59.09s)
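
The passing run above is a cache-priming invocation and can be repeated outside the test harness with the same flags; a sketch, with an arbitrary throwaway profile name in place of the generated one:

    # Populate the ISO, preload and kubectl caches for v1.20.0 without creating a VM (profile name is arbitrary).
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo --force \
      --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
    # Clean up the throwaway profile, as the DeleteAll/DeleteAlwaysSucceeds steps below do.
    out/minikube-linux-amd64 delete -p download-only-demo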

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
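
preload-exists only asserts that the tarball from the previous step landed in the local cache. Roughly the same check by hand, assuming the default MINIKUBE_HOME of ~/.minikube (the CI run uses the Jenkins path shown in the logs above):

    # The download-only run should have left the CRI-O preload and the kubectl binary in the cache.
    ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
    ls -lh "$HOME/.minikube/cache/linux/amd64/v1.20.0/kubectl"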

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-013110
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-013110: exit status 85 (57.508412ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-013110 | jenkins | v1.33.1 | 03 Aug 24 22:47 UTC |          |
	|         | -p download-only-013110        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:47:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:47:44.126860   16807 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:47:44.126988   16807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:47:44.126996   16807 out.go:304] Setting ErrFile to fd 2...
	I0803 22:47:44.127000   16807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:47:44.127169   16807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	W0803 22:47:44.127282   16807 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19364-9607/.minikube/config/config.json: open /home/jenkins/minikube-integration/19364-9607/.minikube/config/config.json: no such file or directory
	I0803 22:47:44.127823   16807 out.go:298] Setting JSON to true
	I0803 22:47:44.128654   16807 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1808,"bootTime":1722723456,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 22:47:44.128707   16807 start.go:139] virtualization: kvm guest
	I0803 22:47:44.131018   16807 out.go:97] [download-only-013110] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0803 22:47:44.131117   16807 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball: no such file or directory
	I0803 22:47:44.131147   16807 notify.go:220] Checking for updates...
	I0803 22:47:44.132562   16807 out.go:169] MINIKUBE_LOCATION=19364
	I0803 22:47:44.133899   16807 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:47:44.135018   16807 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 22:47:44.136189   16807 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:47:44.137259   16807 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0803 22:47:44.139629   16807 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 22:47:44.139826   16807 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:47:44.238314   16807 out.go:97] Using the kvm2 driver based on user configuration
	I0803 22:47:44.238348   16807 start.go:297] selected driver: kvm2
	I0803 22:47:44.238354   16807 start.go:901] validating driver "kvm2" against <nil>
	I0803 22:47:44.238664   16807 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:47:44.238783   16807 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 22:47:44.253587   16807 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 22:47:44.253641   16807 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:47:44.254135   16807 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0803 22:47:44.254295   16807 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 22:47:44.254349   16807 cni.go:84] Creating CNI manager for ""
	I0803 22:47:44.254360   16807 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:47:44.254369   16807 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 22:47:44.254423   16807 start.go:340] cluster config:
	{Name:download-only-013110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-013110 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:47:44.254585   16807 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:47:44.256371   16807 out.go:97] Downloading VM boot image ...
	I0803 22:47:44.256401   16807 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 22:47:55.333124   16807 out.go:97] Starting "download-only-013110" primary control-plane node in "download-only-013110" cluster
	I0803 22:47:55.333163   16807 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0803 22:47:55.444324   16807 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0803 22:47:55.444374   16807 cache.go:56] Caching tarball of preloaded images
	I0803 22:47:55.444534   16807 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0803 22:47:55.446379   16807 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0803 22:47:55.446401   16807 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:47:55.559290   16807 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0803 22:48:14.300817   16807 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:48:14.301912   16807 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:48:15.209988   16807 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0803 22:48:15.210323   16807 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/download-only-013110/config.json ...
	I0803 22:48:15.210353   16807 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/download-only-013110/config.json: {Name:mk6a65a3aa8085da8bd644de053122ed679dde0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:48:15.210524   16807 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0803 22:48:15.210700   16807 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-013110 host does not exist
	  To start a cluster, run: "minikube start -p download-only-013110"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
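
The non-zero exit is the expected result here: a download-only profile never creates a host, so "minikube logs" has nothing to collect (the stdout above says as much). A minimal sketch of the same expectation:

    # Expected to fail (exit status 85 in this run) because the download-only host was never created.
    out/minikube-linux-amd64 logs -p download-only-013110
    echo "exit status: $?"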

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-013110
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (16.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-312107 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-312107 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.616858509s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (16.62s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-312107
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-312107: exit status 85 (58.057799ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-013110 | jenkins | v1.33.1 | 03 Aug 24 22:47 UTC |                     |
	|         | -p download-only-013110        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| delete  | -p download-only-013110        | download-only-013110 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| start   | -o=json --download-only        | download-only-312107 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-312107        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:48:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:48:43.529877   17189 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:48:43.530122   17189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:43.530132   17189 out.go:304] Setting ErrFile to fd 2...
	I0803 22:48:43.530137   17189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:43.530306   17189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 22:48:43.530869   17189 out.go:298] Setting JSON to true
	I0803 22:48:43.531757   17189 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1867,"bootTime":1722723456,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 22:48:43.531814   17189 start.go:139] virtualization: kvm guest
	I0803 22:48:43.533799   17189 out.go:97] [download-only-312107] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 22:48:43.533926   17189 notify.go:220] Checking for updates...
	I0803 22:48:43.535284   17189 out.go:169] MINIKUBE_LOCATION=19364
	I0803 22:48:43.536470   17189 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:48:43.537721   17189 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 22:48:43.538952   17189 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:48:43.540180   17189 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0803 22:48:43.542481   17189 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 22:48:43.542703   17189 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:48:43.574981   17189 out.go:97] Using the kvm2 driver based on user configuration
	I0803 22:48:43.575013   17189 start.go:297] selected driver: kvm2
	I0803 22:48:43.575019   17189 start.go:901] validating driver "kvm2" against <nil>
	I0803 22:48:43.575356   17189 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:48:43.575435   17189 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 22:48:43.590096   17189 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 22:48:43.590143   17189 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:48:43.590622   17189 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0803 22:48:43.590784   17189 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 22:48:43.590846   17189 cni.go:84] Creating CNI manager for ""
	I0803 22:48:43.590863   17189 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:48:43.590877   17189 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 22:48:43.590952   17189 start.go:340] cluster config:
	{Name:download-only-312107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-312107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:48:43.591051   17189 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:48:43.592598   17189 out.go:97] Starting "download-only-312107" primary control-plane node in "download-only-312107" cluster
	I0803 22:48:43.592616   17189 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 22:48:43.760073   17189 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 22:48:43.760109   17189 cache.go:56] Caching tarball of preloaded images
	I0803 22:48:43.760266   17189 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 22:48:43.762109   17189 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0803 22:48:43.762136   17189 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:48:43.872337   17189 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 22:48:58.444718   17189 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:48:58.444826   17189 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-312107 host does not exist
	  To start a cluster, run: "minikube start -p download-only-312107"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-312107
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/json-events (50.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-598666 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-598666 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (50.534148122s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (50.53s)
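
The preload tarballs fetched in these json-events steps follow the URL pattern visible in the download lines of each log, so whether a preload has been published for a given Kubernetes version can be probed directly; a sketch, with the version variable as an assumption:

    # HEAD-request the CRI-O preload for a candidate version; URL pattern taken from the logs above.
    K8S_VERSION=v1.31.0-rc.0
    curl -fsI "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/${K8S_VERSION}/preloaded-images-k8s-v18-${K8S_VERSION}-cri-o-overlay-amd64.tar.lz4" \
      && echo "preload published for ${K8S_VERSION}" \
      || echo "no preload for ${K8S_VERSION}; minikube would fall back to pulling images"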

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-598666
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-598666: exit status 85 (56.33085ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-013110 | jenkins | v1.33.1 | 03 Aug 24 22:47 UTC |                     |
	|         | -p download-only-013110           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| delete  | -p download-only-013110           | download-only-013110 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| start   | -o=json --download-only           | download-only-312107 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-312107           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| delete  | -p download-only-312107           | download-only-312107 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| start   | -o=json --download-only           | download-only-598666 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | -p download-only-598666           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:49:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:49:00.455711   17427 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:49:00.455997   17427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:00.456006   17427 out.go:304] Setting ErrFile to fd 2...
	I0803 22:49:00.456010   17427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:00.456193   17427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 22:49:00.456730   17427 out.go:298] Setting JSON to true
	I0803 22:49:00.457572   17427 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1884,"bootTime":1722723456,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 22:49:00.457628   17427 start.go:139] virtualization: kvm guest
	I0803 22:49:00.459753   17427 out.go:97] [download-only-598666] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 22:49:00.459876   17427 notify.go:220] Checking for updates...
	I0803 22:49:00.461251   17427 out.go:169] MINIKUBE_LOCATION=19364
	I0803 22:49:00.462685   17427 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:49:00.463998   17427 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 22:49:00.465464   17427 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 22:49:00.466817   17427 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0803 22:49:00.469476   17427 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 22:49:00.469681   17427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:49:00.500744   17427 out.go:97] Using the kvm2 driver based on user configuration
	I0803 22:49:00.500772   17427 start.go:297] selected driver: kvm2
	I0803 22:49:00.500779   17427 start.go:901] validating driver "kvm2" against <nil>
	I0803 22:49:00.501113   17427 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:49:00.501180   17427 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-9607/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 22:49:00.515005   17427 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 22:49:00.515052   17427 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:49:00.515516   17427 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0803 22:49:00.515661   17427 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 22:49:00.515715   17427 cni.go:84] Creating CNI manager for ""
	I0803 22:49:00.515726   17427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 22:49:00.515733   17427 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 22:49:00.515781   17427 start.go:340] cluster config:
	{Name:download-only-598666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-598666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:49:00.515880   17427 iso.go:125] acquiring lock: {Name:mk4571d359bb27afcefe7b409c27766a0c6a8f14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:49:00.517430   17427 out.go:97] Starting "download-only-598666" primary control-plane node in "download-only-598666" cluster
	I0803 22:49:00.517450   17427 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0803 22:49:01.116844   17427 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0803 22:49:01.116876   17427 cache.go:56] Caching tarball of preloaded images
	I0803 22:49:01.117032   17427 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0803 22:49:01.119044   17427 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0803 22:49:01.119070   17427 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:49:01.233700   17427 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0803 22:49:12.460411   17427 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:49:12.460502   17427 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-9607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0803 22:49:13.209577   17427 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0803 22:49:13.209902   17427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/download-only-598666/config.json ...
	I0803 22:49:13.209928   17427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/download-only-598666/config.json: {Name:mk4c6f56abfd587d749d51bf103b254218d4b180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:13.210082   17427 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0803 22:49:13.210215   17427 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19364-9607/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-598666 host does not exist
	  To start a cluster, run: "minikube start -p download-only-598666"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-598666
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-308590 --alsologtostderr --binary-mirror http://127.0.0.1:37089 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-308590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-308590
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
TestOffline (104.52s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-855826 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-855826 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m43.536118479s)
helpers_test.go:175: Cleaning up "offline-crio-855826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-855826
--- PASS: TestOffline (104.52s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-110246
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-110246: exit status 85 (51.75384ms)

                                                
                                                
-- stdout --
	* Profile "addons-110246" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-110246"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-110246
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-110246: exit status 85 (52.991974ms)

                                                
                                                
-- stdout --
	* Profile "addons-110246" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-110246"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (212.7s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-110246 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-110246 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.70290048s)
--- PASS: TestAddons/Setup (212.70s)
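
After a setup run like this, the addon state on the profile can be inspected directly; a sketch using the profile name from the run above:

    # List which addons ended up enabled on the test profile.
    out/minikube-linux-amd64 addons list -p addons-110246
    # Individual addons are toggled the same way the suite does it (see the enable/disable calls elsewhere in this report).
    out/minikube-linux-amd64 addons disable registry -p addons-110246 --alsologtostderr -v=1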

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-110246 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-110246 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-110246 get secret gcp-auth -n new-namespace: exit status 1 (79.057351ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-110246 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-110246 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.47s)
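
The check above asserts that the gcp-auth addon places its secret into newly created namespaces; the first get fails because the copy is not instantaneous, and the test re-checks after dumping the gcp-auth pod logs. A hand-run sketch of the same check (the namespace name is arbitrary):

    # A gcp-auth secret should appear in a freshly created namespace once the addon has settled.
    kubectl --context addons-110246 create ns demo-ns
    kubectl --context addons-110246 get secret gcp-auth -n demo-ns   # may need a brief retry, as the first attempt above shows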

                                                
                                    
TestAddons/parallel/Registry (17.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.348817ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-4bhmt" [d9661cee-e4cd-468d-a421-0e709c62e138] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008520095s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4sg2g" [df0da2d6-2cf2-471c-9b29-c471d61d67b5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005732554s
addons_test.go:342: (dbg) Run:  kubectl --context addons-110246 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-110246 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-110246 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.640894554s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 ip
2024/08/03 22:54:03 [DEBUG] GET http://192.168.39.9:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.50s)
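A shell sketch of the two reachability checks performed above, assuming the registry addon exposes the in-cluster Service shown in the wget command and a proxy on port 5000 at the node IP (the DEBUG line above hits the same URL):

    # In-cluster check: probe the registry Service from a throwaway busybox pod.
    kubectl --context addons-110246 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Out-of-cluster check: hit the registry proxy on the node IP reported by minikube.
    curl -s "http://$(out/minikube-linux-amd64 -p addons-110246 ip):5000/"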

                                                
                                    
TestAddons/parallel/InspektorGadget (10.8s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-57hc2" [a1cbed6c-aa85-4435-b572-bb8de8dbcf1a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004007805s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-110246
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-110246: (5.793936921s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.35s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.178291ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-zv5cc" [479ff6dd-8760-4dec-8f87-d1236801993f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.01189406s
addons_test.go:475: (dbg) Run:  kubectl --context addons-110246 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-110246 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.762614971s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.35s)
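A sketch of the same Tiller probe, assuming the helm-tiller addon deploys Tiller into kube-system as above; the client pod is disposable:

    # A one-shot Helm v2 client pod; a "Server" version line in the output confirms Tiller is reachable.
    kubectl --context addons-110246 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version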

                                                
                                    
TestAddons/parallel/CSI (79.44s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.185813ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-110246 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-110246 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2cac33c0-b27a-40a9-9d10-2373d9c0c03f] Pending
helpers_test.go:344: "task-pv-pod" [2cac33c0-b27a-40a9-9d10-2373d9c0c03f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2cac33c0-b27a-40a9-9d10-2373d9c0c03f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.003608485s
addons_test.go:590: (dbg) Run:  kubectl --context addons-110246 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-110246 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-110246 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-110246 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-110246 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-110246 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-110246 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7f13328b-100e-4f24-86e9-64d3986356b1] Pending
helpers_test.go:344: "task-pv-pod-restore" [7f13328b-100e-4f24-86e9-64d3986356b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7f13328b-100e-4f24-86e9-64d3986356b1] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004228038s
addons_test.go:632: (dbg) Run:  kubectl --context addons-110246 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-110246 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-110246 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.869984093s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (79.44s)
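The repeated helper calls above are simply polling the PVC phase; an equivalent sketch, assuming the hpvc claim from testdata/csi-hostpath-driver/pvc.yaml has been created and "Bound" is the phase being waited for:

    # Poll the claim until the csi-hostpath-driver provisioner binds it.
    until [ "$(kubectl --context addons-110246 get pvc hpvc -o jsonpath={.status.phase} -n default)" = "Bound" ]; do
      sleep 2
    done
    # Snapshot readiness is polled the same way via the readyToUse field.
    kubectl --context addons-110246 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default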

                                                
                                    
TestAddons/parallel/Headlamp (22.15s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-110246 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-110246 --alsologtostderr -v=1: (1.282825379s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-q4jd5" [52d328ef-a40c-46a6-8cba-db1244ee12c4] Pending
helpers_test.go:344: "headlamp-7867546754-q4jd5" [52d328ef-a40c-46a6-8cba-db1244ee12c4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-q4jd5" [52d328ef-a40c-46a6-8cba-db1244ee12c4] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.004350743s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 addons disable headlamp --alsologtostderr -v=1: (5.857686714s)
--- PASS: TestAddons/parallel/Headlamp (22.15s)
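A sketch of the enable-and-wait flow above, assuming kubectl's built-in wait is an acceptable substitute for the test's polling helper:

    # Enable the addon, then block until the Headlamp pod reports Ready.
    out/minikube-linux-amd64 addons enable headlamp -p addons-110246 --alsologtostderr -v=1
    kubectl --context addons-110246 -n headlamp wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=headlamp --timeout=8m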

                                                
                                    
TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-xtfkw" [90be3d01-cb2d-40ed-be8d-6428def1bb90] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004934014s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-110246
--- PASS: TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                    
TestAddons/parallel/LocalPath (57.31s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-110246 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-110246 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [88b0fdd5-b968-4459-abca-f9db3d2eca27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [88b0fdd5-b968-4459-abca-f9db3d2eca27] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [88b0fdd5-b968-4459-abca-f9db3d2eca27] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003900814s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-110246 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 ssh "cat /opt/local-path-provisioner/pvc-35102428-567b-4022-9a55-8047dad0f959_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-110246 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-110246 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.535525528s)
--- PASS: TestAddons/parallel/LocalPath (57.31s)
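A sketch of inspecting the provisioned data by hand, assuming the local-path provisioner keeps volumes under /opt/local-path-provisioner inside the VM as the ssh command above indicates; the pvc-<uid> directory name differs per run, so a glob is used:

    # List local-path volumes inside the minikube VM and read back the file the test pod wrote.
    out/minikube-linux-amd64 -p addons-110246 ssh "ls /opt/local-path-provisioner/"
    out/minikube-linux-amd64 -p addons-110246 ssh "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"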

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f6gv6" [5d7278f7-553b-40c0-a2b4-059ba877ae75] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005583566s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-110246
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

                                                
                                    
TestAddons/parallel/Yakd (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-ktrf4" [89c9b8d4-4138-4e07-a43b-ce83bb611c29] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00758885s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-110246 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-110246 addons disable yakd --alsologtostderr -v=1: (5.884742866s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

                                                
                                    
TestCertOptions (47.53s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-201343 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-201343 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.111934947s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-201343 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-201343 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-201343 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-201343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-201343
--- PASS: TestCertOptions (47.53s)
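The SAN and port checks above boil down to inspecting the generated apiserver certificate and kubeconfig; a sketch, assuming the cert-options-201343 profile is still running (the grep filters are illustrative):

    # The extra --apiserver-ips/--apiserver-names values should appear as Subject Alternative Names.
    out/minikube-linux-amd64 -p cert-options-201343 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # The non-default apiserver port should show up in the admin kubeconfig.
    out/minikube-linux-amd64 ssh -p cert-options-201343 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555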

                                                
                                    
TestCertExpiration (362.57s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-705918 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-705918 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (51.412903927s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-705918 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-705918 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (2m10.132856683s)
helpers_test.go:175: Cleaning up "cert-expiration-705918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-705918
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-705918: (1.027947136s)
--- PASS: TestCertExpiration (362.57s)
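A condensed sketch of the expire-then-renew flow exercised above, assuming the same kvm2/crio settings; the explicit sleep is only illustrative of waiting out the 3-minute window, which the test handles internally:

    # Create certificates that expire in 3 minutes, wait them out, then restart with a 1-year expiry to renew.
    out/minikube-linux-amd64 start -p cert-expiration-705918 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    sleep 180
    out/minikube-linux-amd64 start -p cert-expiration-705918 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio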

                                                
                                    
TestForceSystemdFlag (69.09s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-972692 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-972692 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.90441313s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-972692 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-972692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-972692
--- PASS: TestForceSystemdFlag (69.09s)
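A sketch of the verification step, assuming the CRI-O drop-in read above records the cgroup manager under the standard cgroup_manager key:

    # With --force-systemd the drop-in is expected to select the systemd cgroup manager.
    out/minikube-linux-amd64 -p force-systemd-flag-972692 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager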

                                                
                                    
TestForceSystemdEnv (47.73s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-505891 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-505891 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.947854729s)
helpers_test.go:175: Cleaning up "force-systemd-env-505891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-505891
--- PASS: TestForceSystemdEnv (47.73s)

                                                
                                    
TestKVMDriverInstallOrUpdate (8s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.00s)

                                                
                                    
TestErrorSpam/setup (42.89s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-649262 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-649262 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-649262 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-649262 --driver=kvm2  --container-runtime=crio: (42.888784489s)
--- PASS: TestErrorSpam/setup (42.89s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

                                                
                                    
TestErrorSpam/stop (5.33s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 stop: (1.606954619s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 stop: (1.821007508s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-649262 --log_dir /tmp/nospam-649262 stop: (1.89899685s)
--- PASS: TestErrorSpam/stop (5.33s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19364-9607/.minikube/files/etc/test/nested/copy/16795/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (95.24s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434475 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0803 23:03:27.616429   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:27.622635   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:27.632967   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:27.654127   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:27.694508   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:27.774836   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:27.935224   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:28.255843   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:28.896799   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:30.177574   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:32.738606   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:37.858781   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:03:48.098993   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:04:08.579927   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-434475 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m35.243936241s)
--- PASS: TestFunctional/serial/StartWithProxy (95.24s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.72s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434475 --alsologtostderr -v=8
E0803 23:04:49.540246   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-434475 --alsologtostderr -v=8: (41.715204527s)
functional_test.go:659: soft start took 41.715793158s for "functional-434475" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.72s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-434475 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 cache add registry.k8s.io/pause:3.3: (1.136346311s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 cache add registry.k8s.io/pause:latest: (1.038572187s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-434475 /tmp/TestFunctionalserialCacheCmdcacheadd_local1225757213/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cache add minikube-local-cache-test:functional-434475
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 cache add minikube-local-cache-test:functional-434475: (1.887684715s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cache delete minikube-local-cache-test:functional-434475
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-434475
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.025506ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
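The sequence above is the core cache workflow; a sketch, assuming registry.k8s.io/pause:latest is already in minikube's local cache from the earlier add_remote step:

    # Remove the image from the node, confirm it is gone, push the cached copy back, and re-check.
    out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # expected to fail here
    out/minikube-linux-amd64 -p functional-434475 cache reload
    out/minikube-linux-amd64 -p functional-434475 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # should now succeed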

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 kubectl -- --context functional-434475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-434475 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (29.72s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-434475 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.718735159s)
functional_test.go:757: restart took 29.718855797s for "functional-434475" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.72s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-434475 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
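The same health read can be done directly with jsonpath instead of the test's JSON parsing; a sketch, assuming the standard component/tier labels on static control-plane pods:

    # Print each control-plane component with its pod phase.
    kubectl --context functional-434475 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{" "}{.status.phase}{"\n"}{end}'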

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 logs: (1.458945331s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 logs --file /tmp/TestFunctionalserialLogsFileCmd2086284614/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 logs --file /tmp/TestFunctionalserialLogsFileCmd2086284614/001/logs.txt: (1.447950484s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.7s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-434475 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-434475
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-434475: exit status 115 (273.0019ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.198:30233 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-434475 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-434475 delete -f testdata/invalidsvc.yaml: (1.234243863s)
--- PASS: TestFunctional/serial/InvalidService (4.70s)
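A sketch of the negative check above, assuming testdata/invalidsvc.yaml defines a Service with no running backing pod; exit status 115 is what the run above reports for the unreachable service:

    # Apply a service that cannot be reached, confirm `minikube service` refuses it, then clean up.
    kubectl --context functional-434475 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-434475 || echo "exit=$?"
    kubectl --context functional-434475 delete -f testdata/invalidsvc.yaml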

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 config get cpus: exit status 14 (46.527227ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 config get cpus: exit status 14 (45.382178ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-434475 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-434475 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27152: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.03s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-434475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.994551ms)

                                                
                                                
-- stdout --
	* [functional-434475] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:06:24.652989   26868 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:06:24.653112   26868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:24.653125   26868 out.go:304] Setting ErrFile to fd 2...
	I0803 23:06:24.653132   26868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:24.653346   26868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:06:24.654011   26868 out.go:298] Setting JSON to false
	I0803 23:06:24.655159   26868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2929,"bootTime":1722723456,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:06:24.655217   26868 start.go:139] virtualization: kvm guest
	I0803 23:06:24.657265   26868 out.go:177] * [functional-434475] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:06:24.658581   26868 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:06:24.658638   26868 notify.go:220] Checking for updates...
	I0803 23:06:24.661018   26868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:06:24.662285   26868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:06:24.663520   26868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:24.664825   26868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:06:24.665987   26868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:06:24.667373   26868 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:06:24.667788   26868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:06:24.667862   26868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:06:24.684086   26868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46195
	I0803 23:06:24.684446   26868 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:06:24.684956   26868 main.go:141] libmachine: Using API Version  1
	I0803 23:06:24.684975   26868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:06:24.685281   26868 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:06:24.685527   26868 main.go:141] libmachine: (functional-434475) Calling .DriverName
	I0803 23:06:24.685788   26868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:06:24.686122   26868 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:06:24.686158   26868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:06:24.700651   26868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40327
	I0803 23:06:24.701027   26868 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:06:24.701527   26868 main.go:141] libmachine: Using API Version  1
	I0803 23:06:24.701557   26868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:06:24.701877   26868 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:06:24.702049   26868 main.go:141] libmachine: (functional-434475) Calling .DriverName
	I0803 23:06:24.736845   26868 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:06:24.738167   26868 start.go:297] selected driver: kvm2
	I0803 23:06:24.738189   26868 start.go:901] validating driver "kvm2" against &{Name:functional-434475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-434475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:06:24.738321   26868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:06:24.740653   26868 out.go:177] 
	W0803 23:06:24.741882   26868 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0803 23:06:24.743347   26868 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434475 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
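
The non-zero exit above is the expected outcome: even with --dry-run, minikube validates the requested memory against the usable minimum (1800MB here) and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching the existing profile. A minimal sketch contrasting the failing and passing invocations, using the same flags as the log (only the explicit exit-code checks are added):

# Under-provisioned request: validation fails, no VM work is done.
out/minikube-linux-amd64 start -p functional-434475 --dry-run --memory 250MB \
  --alsologtostderr --driver=kvm2 --container-runtime=crio
echo "exit: $?"    # 23 = RSRC_INSUFFICIENT_REQ_MEMORY

# Same dry run without the tiny memory request: validation passes, exit 0.
out/minikube-linux-amd64 start -p functional-434475 --dry-run \
  --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
echo "exit: $?"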

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-434475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-434475 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.182967ms)

                                                
                                                
-- stdout --
	* [functional-434475] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:06:23.474201   26658 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:06:23.474317   26658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:23.474326   26658 out.go:304] Setting ErrFile to fd 2...
	I0803 23:06:23.474331   26658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:23.474618   26658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:06:23.475124   26658 out.go:298] Setting JSON to false
	I0803 23:06:23.475989   26658 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2927,"bootTime":1722723456,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:06:23.476047   26658 start.go:139] virtualization: kvm guest
	I0803 23:06:23.478313   26658 out.go:177] * [functional-434475] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0803 23:06:23.480000   26658 notify.go:220] Checking for updates...
	I0803 23:06:23.480022   26658 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:06:23.481570   26658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:06:23.482998   26658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0803 23:06:23.484568   26658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0803 23:06:23.486102   26658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:06:23.487802   26658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:06:23.489920   26658 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:06:23.490361   26658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:06:23.490427   26658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:06:23.505194   26658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0803 23:06:23.505697   26658 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:06:23.506359   26658 main.go:141] libmachine: Using API Version  1
	I0803 23:06:23.506390   26658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:06:23.506715   26658 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:06:23.506926   26658 main.go:141] libmachine: (functional-434475) Calling .DriverName
	I0803 23:06:23.507156   26658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:06:23.507519   26658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:06:23.507564   26658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:06:23.522112   26658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34211
	I0803 23:06:23.522539   26658 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:06:23.523038   26658 main.go:141] libmachine: Using API Version  1
	I0803 23:06:23.523062   26658 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:06:23.523333   26658 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:06:23.523503   26658 main.go:141] libmachine: (functional-434475) Calling .DriverName
	I0803 23:06:23.555629   26658 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0803 23:06:23.556675   26658 start.go:297] selected driver: kvm2
	I0803 23:06:23.556689   26658 start.go:901] validating driver "kvm2" against &{Name:functional-434475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-434475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:06:23.556801   26658 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:06:23.559410   26658 out.go:177] 
	W0803 23:06:23.560793   26658 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0803 23:06:23.561722   26658 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
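
This is the same dry-run failure as above, but with the CLI localized to French, so the test only needs to see translated strings such as "Utilisation du pilote kvm2 basé sur le profil existant". The locale is driven by the environment; the exact variable the test sets is not visible in this log, so LC_ALL=fr below is an assumption:

# Request French output; the memory validation still fails with exit status 23,
# only the wording of the RSRC_INSUFFICIENT_REQ_MEMORY message changes.
LC_ALL=fr out/minikube-linux-amd64 start -p functional-434475 --dry-run --memory 250MB \
  --alsologtostderr --driver=kvm2 --container-runtime=crio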

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.76s)
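
The three invocations above cover the default, Go-template, and JSON output modes of minikube status. A small illustration of pulling individual fields from each (the jq line is an addition, and the JSON field name is inferred from the template keys used by the test):

# Default human-readable status.
out/minikube-linux-amd64 -p functional-434475 status

# Go-template output: print only the fields you care about.
out/minikube-linux-amd64 -p functional-434475 status -f 'host:{{.Host}} apiserver:{{.APIServer}}'

# JSON output for scripting (jq assumed to be available on the host).
out/minikube-linux-amd64 -p functional-434475 status -o json | jq -r .Host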

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (23.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-434475 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-434475 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-9hxds" [01138de9-e51b-4b15-b169-17816cf7a5ff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-9hxds" [01138de9-e51b-4b15-b169-17816cf7a5ff] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.005265468s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.198:31344
functional_test.go:1671: http://192.168.39.198:31344: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-9hxds

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.198:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.198:31344
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.52s)
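
The sequence above deploys an echoserver, exposes it as a NodePort service, resolves the URL with minikube service --url, and asserts on the echoed request (Hostname, headers, and so on). Reproduced as a standalone sequence with the names and image from the log; the wait, grep, and cleanup steps are additions:

kubectl --context functional-434475 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-434475 expose deployment hello-node-connect \
  --type=NodePort --port=8080

# Wait for the pod, resolve the NodePort URL through minikube, and hit it.
kubectl --context functional-434475 wait --for=condition=available \
  deployment/hello-node-connect --timeout=120s
URL=$(out/minikube-linux-amd64 -p functional-434475 service hello-node-connect --url)
curl -s "$URL" | grep Hostname

# Cleanup.
kubectl --context functional-434475 delete service,deployment hello-node-connect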

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (47.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [47cd295b-aa56-4004-bca4-a3139ca811c7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005659998s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-434475 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-434475 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-434475 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-434475 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-434475 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d3d9e54e-e19c-48e2-85b7-0a7a5e3bd174] Pending
helpers_test.go:344: "sp-pod" [d3d9e54e-e19c-48e2-85b7-0a7a5e3bd174] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d3d9e54e-e19c-48e2-85b7-0a7a5e3bd174] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.005443313s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-434475 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-434475 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-434475 delete -f testdata/storage-provisioner/pod.yaml: (1.435739119s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-434475 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ca02c09-b1b4-47ee-a233-ca94b5df7994] Pending
helpers_test.go:344: "sp-pod" [2ca02c09-b1b4-47ee-a233-ca94b5df7994] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2ca02c09-b1b4-47ee-a233-ca94b5df7994] Running
2024/08/03 23:06:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004653651s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-434475 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.03s)
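
The flow above is: confirm the storage-provisioner and default StorageClass are healthy, create a PVC, run a pod that mounts it, write a file through the mount, delete the pod, re-create it, and check that the file survived. The actual testdata/storage-provisioner manifests are not reproduced in this log, so the YAML below is a minimal stand-in sketch (names, label, and mount path match the log; the image and requested size are assumptions):

# Claim storage from the default StorageClass.
kubectl --context functional-434475 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF

# Pod mounting the claim at /tmp/mount.
kubectl --context functional-434475 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

# Write through the mount, recreate the pod, and confirm the data persisted.
kubectl --context functional-434475 wait --for=condition=ready pod/sp-pod --timeout=180s
kubectl --context functional-434475 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-434475 delete pod sp-pod
# (re-apply the pod manifest above, wait for readiness again, then:)
kubectl --context functional-434475 exec sp-pod -- ls /tmp/mount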

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh -n functional-434475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cp functional-434475:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd818899343/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh -n functional-434475 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh -n functional-434475 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)
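
minikube cp copies in both directions, host to node and node back to host, creating the destination directory on the node when needed. The same three copies as above, with a host-side verification added (the /tmp copy path is illustrative):

# Host -> node, then read it back over ssh.
out/minikube-linux-amd64 -p functional-434475 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-434475 ssh -n functional-434475 "sudo cat /home/docker/cp-test.txt"

# Node -> host, then compare with the original.
out/minikube-linux-amd64 -p functional-434475 cp functional-434475:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
diff testdata/cp-test.txt /tmp/cp-test-copy.txt && echo "round-trip OK"

# Host -> node into a directory that does not exist yet.
out/minikube-linux-amd64 -p functional-434475 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt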

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-434475 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-d9qcv" [208468b7-41c8-4ba5-b620-b4f3cba1ee84] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-d9qcv" [208468b7-41c8-4ba5-b620-b4f3cba1ee84] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004846718s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-434475 exec mysql-64454c8b5c-d9qcv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-434475 exec mysql-64454c8b5c-d9qcv -- mysql -ppassword -e "show databases;": exit status 1 (152.927298ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-434475 exec mysql-64454c8b5c-d9qcv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-434475 exec mysql-64454c8b5c-d9qcv -- mysql -ppassword -e "show databases;": exit status 1 (130.043347ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-434475 exec mysql-64454c8b5c-d9qcv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-434475 exec mysql-64454c8b5c-d9qcv -- mysql -ppassword -e "show databases;": exit status 1 (438.36855ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-434475 exec mysql-64454c8b5c-d9qcv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.03s)
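
The repeated non-zero exits above are expected while MySQL initializes: authentication is not ready at first ("Access denied"), then the server socket is not up yet, and the helper simply retries until "show databases;" succeeds. A shell sketch of the same retry loop (the pod lookup, 10s interval, and 12-attempt cap are assumptions):

# Find the mysql pod created from testdata/mysql.yaml.
POD=$(kubectl --context functional-434475 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')

# Retry the query until the server actually accepts connections.
for i in $(seq 1 12); do
  if kubectl --context functional-434475 exec "$POD" -- mysql -ppassword -e "show databases;"; then
    break
  fi
  echo "mysql not ready yet (attempt $i), retrying..."
  sleep 10
done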

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16795/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /etc/test/nested/copy/16795/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
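
File sync places anything stored under the profile's files/ directory into the node at the same absolute path when the cluster starts, which is what puts /etc/test/nested/copy/16795/hosts in place here. A hedged example of seeding such a file (the source layout under MINIKUBE_HOME/files is my reading of how the test prepares this, it is not shown in the log, and the file only appears on the node after the next minikube start):

# Stage the file under the file-sync root...
MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/16795"
cp /etc/hosts "$MINIKUBE_HOME/files/etc/test/nested/copy/16795/hosts"

# ...then, after the cluster has been (re)started, verify it was synced into the node.
out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /etc/test/nested/copy/16795/hosts"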

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16795.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /etc/ssl/certs/16795.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16795.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /usr/share/ca-certificates/16795.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/167952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /etc/ssl/certs/167952.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/167952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /usr/share/ca-certificates/167952.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.29s)
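
Cert sync installs user-supplied certificates into the node's trust store: each .pem shows up under both /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL-style hash name (51391683.0 and 3ec20f2e.0 above). A quick way to confirm that the three paths for one cert really carry the same content (the checksum comparison is an addition, and it assumes the hash-named file is a copy or symlink of the .pem):

# All three checksums are expected to match.
for f in /etc/ssl/certs/16795.pem /usr/share/ca-certificates/16795.pem /etc/ssl/certs/51391683.0; do
  out/minikube-linux-amd64 -p functional-434475 ssh "sudo cat $f" | sha256sum
done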

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-434475 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
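
The assertion flattens the first node's label keys into one line using a Go template. The same template, alongside a jsonpath variant that also shows the values (the jsonpath line is an addition):

# Template used by the test: print every label key of the first node.
kubectl --context functional-434475 get nodes --output=go-template \
  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'

# Equivalent jsonpath form, printing the full label map.
kubectl --context functional-434475 get nodes -o jsonpath='{.items[0].metadata.labels}'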

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh "sudo systemctl is-active docker": exit status 1 (221.8854ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh "sudo systemctl is-active containerd": exit status 1 (211.79697ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
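
With cri-o as the selected runtime, docker and containerd are expected to be stopped; systemctl is-active prints "inactive" and exits with status 3 for a stopped unit, which is why the helper records a non-zero exit even though the test passes. A compact check over all three runtimes (the crio line is added for contrast; unit names are the usual systemd ones):

for unit in crio docker containerd; do
  echo -n "$unit: "
  out/minikube-linux-amd64 -p functional-434475 ssh "sudo systemctl is-active $unit" || true
done
# Expected: crio active; docker and containerd inactive (non-active units exit with status 3).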

                                                
                                    
x
+
TestFunctional/parallel/License (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434475 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-434475
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-434475
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434475 image ls --format short --alsologtostderr:
I0803 23:06:27.977800   27333 out.go:291] Setting OutFile to fd 1 ...
I0803 23:06:27.977934   27333 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:27.977943   27333 out.go:304] Setting ErrFile to fd 2...
I0803 23:06:27.977948   27333 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:27.978111   27333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
I0803 23:06:27.978612   27333 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:27.978708   27333 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:27.979055   27333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:27.979101   27333 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:27.993899   27333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
I0803 23:06:27.994357   27333 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:27.994854   27333 main.go:141] libmachine: Using API Version  1
I0803 23:06:27.994877   27333 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:27.995199   27333 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:27.995367   27333 main.go:141] libmachine: (functional-434475) Calling .GetState
I0803 23:06:27.997031   27333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:27.997065   27333 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:28.012145   27333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
I0803 23:06:28.012480   27333 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:28.012988   27333 main.go:141] libmachine: Using API Version  1
I0803 23:06:28.013009   27333 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:28.013346   27333 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:28.013570   27333 main.go:141] libmachine: (functional-434475) Calling .DriverName
I0803 23:06:28.013790   27333 ssh_runner.go:195] Run: systemctl --version
I0803 23:06:28.013823   27333 main.go:141] libmachine: (functional-434475) Calling .GetSSHHostname
I0803 23:06:28.016443   27333 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:28.016784   27333 main.go:141] libmachine: (functional-434475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:79:4f", ip: ""} in network mk-functional-434475: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:09 +0000 UTC Type:0 Mac:52:54:00:70:79:4f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-434475 Clientid:01:52:54:00:70:79:4f}
I0803 23:06:28.016809   27333 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined IP address 192.168.39.198 and MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:28.016969   27333 main.go:141] libmachine: (functional-434475) Calling .GetSSHPort
I0803 23:06:28.017179   27333 main.go:141] libmachine: (functional-434475) Calling .GetSSHKeyPath
I0803 23:06:28.017391   27333 main.go:141] libmachine: (functional-434475) Calling .GetSSHUsername
I0803 23:06:28.017576   27333 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/functional-434475/id_rsa Username:docker}
I0803 23:06:28.100201   27333 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:06:28.156554   27333 main.go:141] libmachine: Making call to close driver server
I0803 23:06:28.156567   27333 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:28.156850   27333 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:28.156871   27333 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:06:28.156887   27333 main.go:141] libmachine: Making call to close driver server
I0803 23:06:28.156895   27333 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:28.157186   27333 main.go:141] libmachine: (functional-434475) DBG | Closing plugin on server side
I0803 23:06:28.157188   27333 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:28.157217   27333 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
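
minikube image ls supports several output formats: the short form above prints one repo:tag per line, while the table and json variants exercised below include image IDs and sizes. A quick comparison (the grep is only an example filter):

# One image reference per line.
out/minikube-linux-amd64 -p functional-434475 image ls --format short

# Tabular view with IDs and sizes; json is the convenient form for scripting.
out/minikube-linux-amd64 -p functional-434475 image ls --format table
out/minikube-linux-amd64 -p functional-434475 image ls --format json | grep -o '"repoTags":\[[^]]*\]'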

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434475 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server           | functional-434475  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-434475  | e8beb0441a1b6 | 3.33kB |
| localhost/my-image                      | functional-434475  | 8b2d52cd349d4 | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434475 image ls --format table --alsologtostderr:
I0803 23:06:33.282878   27706 out.go:291] Setting OutFile to fd 1 ...
I0803 23:06:33.283008   27706 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:33.283018   27706 out.go:304] Setting ErrFile to fd 2...
I0803 23:06:33.283025   27706 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:33.283277   27706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
I0803 23:06:33.284059   27706 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:33.284203   27706 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:33.284756   27706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:33.284818   27706 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:33.301163   27706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
I0803 23:06:33.301697   27706 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:33.302252   27706 main.go:141] libmachine: Using API Version  1
I0803 23:06:33.302282   27706 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:33.302704   27706 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:33.302898   27706 main.go:141] libmachine: (functional-434475) Calling .GetState
I0803 23:06:33.304834   27706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:33.304879   27706 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:33.319759   27706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
I0803 23:06:33.320237   27706 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:33.320700   27706 main.go:141] libmachine: Using API Version  1
I0803 23:06:33.320726   27706 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:33.321145   27706 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:33.321388   27706 main.go:141] libmachine: (functional-434475) Calling .DriverName
I0803 23:06:33.321599   27706 ssh_runner.go:195] Run: systemctl --version
I0803 23:06:33.321621   27706 main.go:141] libmachine: (functional-434475) Calling .GetSSHHostname
I0803 23:06:33.324369   27706 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:33.324870   27706 main.go:141] libmachine: (functional-434475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:79:4f", ip: ""} in network mk-functional-434475: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:09 +0000 UTC Type:0 Mac:52:54:00:70:79:4f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-434475 Clientid:01:52:54:00:70:79:4f}
I0803 23:06:33.324892   27706 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined IP address 192.168.39.198 and MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:33.325054   27706 main.go:141] libmachine: (functional-434475) Calling .GetSSHPort
I0803 23:06:33.325240   27706 main.go:141] libmachine: (functional-434475) Calling .GetSSHKeyPath
I0803 23:06:33.325413   27706 main.go:141] libmachine: (functional-434475) Calling .GetSSHUsername
I0803 23:06:33.325611   27706 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/functional-434475/id_rsa Username:docker}
I0803 23:06:33.425737   27706 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:06:33.497516   27706 main.go:141] libmachine: Making call to close driver server
I0803 23:06:33.497541   27706 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:33.497822   27706 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:33.497854   27706 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:06:33.497862   27706 main.go:141] libmachine: (functional-434475) DBG | Closing plugin on server side
I0803 23:06:33.497869   27706 main.go:141] libmachine: Making call to close driver server
I0803 23:06:33.497880   27706 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:33.498063   27706 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:33.498076   27706 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434475 image ls --format json --alsologtostderr:
[{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead232
1bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"ba472475c3e372b90c62a1d3ac50df805de9af49818c78902bb72a32e1ca2562","repoDigests":["docker.io/library/19d4cf3bf5bcc
1744f4d81f821514faea46015521fb358c4b9aa5fbea68c3ba9-tmp@sha256:b1fa20811b080345b5ea6b6a9bd190e8e2dab9a7368f538b63586ae5e30bb61e"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e8beb0441a1b6753529c909b65e18242f01f145d6665f0ea77d56aeb0003e485","repoDigests":["localhost/minikube-local-cache-
test@sha256:37554121dd8e0fa9c987aeaf27084f2796733ea781aae36894d63246e2ed193f"],"repoTags":["localhost/minikube-local-cache-test:functional-434475"],"size":"3330"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functio
nal-434475"],"size":"4943877"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"8b2d52cd349d4a8632bb1d4723282d2c5fed47775ceef1235374f05d2328d435","repoDigests":["localhost/my-image@sha256:12fd225e4b0a88c969e017f92108bda1b5a2d0bb90e7943f9633e301a73588bc"],"repoTags":["localhost/my-image:functional-434475"],"size":"1468599"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id"
:"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d
55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3
.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434475 image ls --format json --alsologtostderr:
I0803 23:06:33.040283   27656 out.go:291] Setting OutFile to fd 1 ...
I0803 23:06:33.040401   27656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:33.040412   27656 out.go:304] Setting ErrFile to fd 2...
I0803 23:06:33.040429   27656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:33.040740   27656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
I0803 23:06:33.041525   27656 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:33.041665   27656 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:33.042117   27656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:33.042155   27656 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:33.056789   27656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40975
I0803 23:06:33.057271   27656 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:33.057815   27656 main.go:141] libmachine: Using API Version  1
I0803 23:06:33.057834   27656 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:33.058127   27656 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:33.058327   27656 main.go:141] libmachine: (functional-434475) Calling .GetState
I0803 23:06:33.060039   27656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:33.060077   27656 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:33.075427   27656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
I0803 23:06:33.075918   27656 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:33.076410   27656 main.go:141] libmachine: Using API Version  1
I0803 23:06:33.076440   27656 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:33.076783   27656 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:33.076927   27656 main.go:141] libmachine: (functional-434475) Calling .DriverName
I0803 23:06:33.077150   27656 ssh_runner.go:195] Run: systemctl --version
I0803 23:06:33.077175   27656 main.go:141] libmachine: (functional-434475) Calling .GetSSHHostname
I0803 23:06:33.079550   27656 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:33.079997   27656 main.go:141] libmachine: (functional-434475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:79:4f", ip: ""} in network mk-functional-434475: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:09 +0000 UTC Type:0 Mac:52:54:00:70:79:4f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-434475 Clientid:01:52:54:00:70:79:4f}
I0803 23:06:33.080047   27656 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined IP address 192.168.39.198 and MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:33.080190   27656 main.go:141] libmachine: (functional-434475) Calling .GetSSHPort
I0803 23:06:33.080363   27656 main.go:141] libmachine: (functional-434475) Calling .GetSSHKeyPath
I0803 23:06:33.080520   27656 main.go:141] libmachine: (functional-434475) Calling .GetSSHUsername
I0803 23:06:33.080671   27656 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/functional-434475/id_rsa Username:docker}
I0803 23:06:33.178384   27656 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:06:33.227299   27656 main.go:141] libmachine: Making call to close driver server
I0803 23:06:33.227312   27656 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:33.227552   27656 main.go:141] libmachine: (functional-434475) DBG | Closing plugin on server side
I0803 23:06:33.227613   27656 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:33.227635   27656 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:06:33.227645   27656 main.go:141] libmachine: Making call to close driver server
I0803 23:06:33.227656   27656 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:33.227856   27656 main.go:141] libmachine: (functional-434475) DBG | Closing plugin on server side
I0803 23:06:33.227895   27656 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:33.227903   27656 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434475 image ls --format yaml --alsologtostderr:
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: e8beb0441a1b6753529c909b65e18242f01f145d6665f0ea77d56aeb0003e485
repoDigests:
- localhost/minikube-local-cache-test@sha256:37554121dd8e0fa9c987aeaf27084f2796733ea781aae36894d63246e2ed193f
repoTags:
- localhost/minikube-local-cache-test:functional-434475
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-434475
size: "4943877"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434475 image ls --format yaml --alsologtostderr:
I0803 23:06:28.201090   27357 out.go:291] Setting OutFile to fd 1 ...
I0803 23:06:28.201311   27357 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:28.201319   27357 out.go:304] Setting ErrFile to fd 2...
I0803 23:06:28.201323   27357 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:28.201543   27357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
I0803 23:06:28.202061   27357 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:28.202149   27357 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:28.202508   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:28.202543   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:28.217262   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38385
I0803 23:06:28.217737   27357 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:28.218374   27357 main.go:141] libmachine: Using API Version  1
I0803 23:06:28.218400   27357 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:28.218713   27357 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:28.218928   27357 main.go:141] libmachine: (functional-434475) Calling .GetState
I0803 23:06:28.220660   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:28.220693   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:28.235615   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45135
I0803 23:06:28.236040   27357 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:28.236490   27357 main.go:141] libmachine: Using API Version  1
I0803 23:06:28.236513   27357 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:28.236775   27357 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:28.237108   27357 main.go:141] libmachine: (functional-434475) Calling .DriverName
I0803 23:06:28.237290   27357 ssh_runner.go:195] Run: systemctl --version
I0803 23:06:28.237310   27357 main.go:141] libmachine: (functional-434475) Calling .GetSSHHostname
I0803 23:06:28.239973   27357 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:28.240339   27357 main.go:141] libmachine: (functional-434475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:79:4f", ip: ""} in network mk-functional-434475: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:09 +0000 UTC Type:0 Mac:52:54:00:70:79:4f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-434475 Clientid:01:52:54:00:70:79:4f}
I0803 23:06:28.240367   27357 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined IP address 192.168.39.198 and MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:28.240518   27357 main.go:141] libmachine: (functional-434475) Calling .GetSSHPort
I0803 23:06:28.240685   27357 main.go:141] libmachine: (functional-434475) Calling .GetSSHKeyPath
I0803 23:06:28.240839   27357 main.go:141] libmachine: (functional-434475) Calling .GetSSHUsername
I0803 23:06:28.240943   27357 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/functional-434475/id_rsa Username:docker}
I0803 23:06:28.324028   27357 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:06:28.365153   27357 main.go:141] libmachine: Making call to close driver server
I0803 23:06:28.365166   27357 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:28.365472   27357 main.go:141] libmachine: (functional-434475) DBG | Closing plugin on server side
I0803 23:06:28.365497   27357 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:28.365512   27357 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:06:28.365523   27357 main.go:141] libmachine: Making call to close driver server
I0803 23:06:28.365531   27357 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:28.365747   27357 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:28.365760   27357 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:06:28.365842   27357 main.go:141] libmachine: (functional-434475) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh pgrep buildkitd: exit status 1 (187.3932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image build -t localhost/my-image:functional-434475 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 image build -t localhost/my-image:functional-434475 testdata/build --alsologtostderr: (4.129945429s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-434475 image build -t localhost/my-image:functional-434475 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ba472475c3e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-434475
--> 8b2d52cd349
Successfully tagged localhost/my-image:functional-434475
8b2d52cd349d4a8632bb1d4723282d2c5fed47775ceef1235374f05d2328d435
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-434475 image build -t localhost/my-image:functional-434475 testdata/build --alsologtostderr:
I0803 23:06:28.598172   27411 out.go:291] Setting OutFile to fd 1 ...
I0803 23:06:28.598433   27411 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:28.598444   27411 out.go:304] Setting ErrFile to fd 2...
I0803 23:06:28.598449   27411 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:06:28.598610   27411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
I0803 23:06:28.599136   27411 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:28.599632   27411 config.go:182] Loaded profile config "functional-434475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:06:28.599989   27411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:28.600032   27411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:28.614970   27411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
I0803 23:06:28.615492   27411 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:28.616076   27411 main.go:141] libmachine: Using API Version  1
I0803 23:06:28.616100   27411 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:28.616395   27411 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:28.616710   27411 main.go:141] libmachine: (functional-434475) Calling .GetState
I0803 23:06:28.618701   27411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:06:28.618737   27411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:06:28.636959   27411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
I0803 23:06:28.637445   27411 main.go:141] libmachine: () Calling .GetVersion
I0803 23:06:28.638111   27411 main.go:141] libmachine: Using API Version  1
I0803 23:06:28.638147   27411 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:06:28.638473   27411 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:06:28.638673   27411 main.go:141] libmachine: (functional-434475) Calling .DriverName
I0803 23:06:28.638890   27411 ssh_runner.go:195] Run: systemctl --version
I0803 23:06:28.638923   27411 main.go:141] libmachine: (functional-434475) Calling .GetSSHHostname
I0803 23:06:28.641847   27411 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:28.642141   27411 main.go:141] libmachine: (functional-434475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:79:4f", ip: ""} in network mk-functional-434475: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:09 +0000 UTC Type:0 Mac:52:54:00:70:79:4f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-434475 Clientid:01:52:54:00:70:79:4f}
I0803 23:06:28.642171   27411 main.go:141] libmachine: (functional-434475) DBG | domain functional-434475 has defined IP address 192.168.39.198 and MAC address 52:54:00:70:79:4f in network mk-functional-434475
I0803 23:06:28.642296   27411 main.go:141] libmachine: (functional-434475) Calling .GetSSHPort
I0803 23:06:28.642466   27411 main.go:141] libmachine: (functional-434475) Calling .GetSSHKeyPath
I0803 23:06:28.642613   27411 main.go:141] libmachine: (functional-434475) Calling .GetSSHUsername
I0803 23:06:28.642747   27411 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/functional-434475/id_rsa Username:docker}
I0803 23:06:28.735947   27411 build_images.go:161] Building image from path: /tmp/build.444959134.tar
I0803 23:06:28.736017   27411 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0803 23:06:28.752532   27411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.444959134.tar
I0803 23:06:28.758694   27411 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.444959134.tar: stat -c "%s %y" /var/lib/minikube/build/build.444959134.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.444959134.tar': No such file or directory
I0803 23:06:28.758720   27411 ssh_runner.go:362] scp /tmp/build.444959134.tar --> /var/lib/minikube/build/build.444959134.tar (3072 bytes)
I0803 23:06:28.788670   27411 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.444959134
I0803 23:06:28.804985   27411 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.444959134 -xf /var/lib/minikube/build/build.444959134.tar
I0803 23:06:28.820666   27411 crio.go:315] Building image: /var/lib/minikube/build/build.444959134
I0803 23:06:28.820744   27411 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-434475 /var/lib/minikube/build/build.444959134 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0803 23:06:32.633643   27411 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-434475 /var/lib/minikube/build/build.444959134 --cgroup-manager=cgroupfs: (3.812873119s)
I0803 23:06:32.633766   27411 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.444959134
I0803 23:06:32.658224   27411 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.444959134.tar
I0803 23:06:32.680995   27411 build_images.go:217] Built localhost/my-image:functional-434475 from /tmp/build.444959134.tar
I0803 23:06:32.681027   27411 build_images.go:133] succeeded building to: functional-434475
I0803 23:06:32.681032   27411 build_images.go:134] failed building to: 
I0803 23:06:32.681053   27411 main.go:141] libmachine: Making call to close driver server
I0803 23:06:32.681061   27411 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:32.681390   27411 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:32.681412   27411 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:06:32.681423   27411 main.go:141] libmachine: Making call to close driver server
I0803 23:06:32.681433   27411 main.go:141] libmachine: (functional-434475) Calling .Close
I0803 23:06:32.681666   27411 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:06:32.681683   27411 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)
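The three STEP lines above come from the build context under testdata/build (its Dockerfile/Containerfile is not shown in this log, but the output implies FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), built inside the VM with podman. As a sketch only, assuming the functional-434475 profile is still running, the result can be double-checked by hand with standard minikube and crictl commands; these are not part of the test itself:

out/minikube-linux-amd64 -p functional-434475 ssh -- sudo crictl images | grep my-image
out/minikube-linux-amd64 -p functional-434475 image ls | grep localhost/my-image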

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.944495714s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-434475
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "233.035999ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "44.678709ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image load --daemon docker.io/kicbase/echo-server:functional-434475 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 image load --daemon docker.io/kicbase/echo-server:functional-434475 --alsologtostderr: (1.241704978s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "223.591548ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "48.933065ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image load --daemon docker.io/kicbase/echo-server:functional-434475 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-434475
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image load --daemon docker.io/kicbase/echo-server:functional-434475 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 image load --daemon docker.io/kicbase/echo-server:functional-434475 --alsologtostderr: (4.146378069s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image save docker.io/kicbase/echo-server:functional-434475 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
E0803 23:06:11.461402   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 image save docker.io/kicbase/echo-server:functional-434475 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.17613157s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.18s)
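The save step above writes the echo-server image to a tar archive on the host. A quick sanity check, not performed by the test and assuming the output is a plain tarball as image save normally produces, is to list the archive contents:

tar -tf /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar | head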

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image rm docker.io/kicbase/echo-server:functional-434475 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-434475 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.998370469s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-434475
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 image save --daemon docker.io/kicbase/echo-server:functional-434475 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-434475
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-434475 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-434475 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-mfn7z" [b7eb6327-fb56-429a-970a-c94aa5701cf9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-mfn7z" [b7eb6327-fb56-429a-970a-c94aa5701cf9] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00399853s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.18s)
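The objects created above are an ordinary Deployment plus a NodePort Service, both named hello-node; outside the test they can be inspected with standard kubectl commands (illustrative only, not run by the suite):

kubectl --context functional-434475 get deployment hello-node
kubectl --context functional-434475 get svc hello-node -o wide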

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdany-port3342436151/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722726383567291919" to /tmp/TestFunctionalparallelMountCmdany-port3342436151/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722726383567291919" to /tmp/TestFunctionalparallelMountCmdany-port3342436151/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722726383567291919" to /tmp/TestFunctionalparallelMountCmdany-port3342436151/001/test-1722726383567291919
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (236.409817ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  3 23:06 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  3 23:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  3 23:06 test-1722726383567291919
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh cat /mount-9p/test-1722726383567291919
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-434475 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [59860c58-0bc9-4a3d-8068-caaa27c82d26] Pending
helpers_test.go:344: "busybox-mount" [59860c58-0bc9-4a3d-8068-caaa27c82d26] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [59860c58-0bc9-4a3d-8068-caaa27c82d26] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [59860c58-0bc9-4a3d-8068-caaa27c82d26] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.018244208s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-434475 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdany-port3342436151/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.85s)
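A condensed, by-hand version of the same 9p round trip looks like the following; /tmp/example-dir is a placeholder for any host directory, and each command mirrors one the test itself runs:

out/minikube-linux-amd64 mount -p functional-434475 /tmp/example-dir:/mount-9p &
out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-434475 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-434475 ssh "sudo umount -f /mount-9p"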

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 service list -o json
functional_test.go:1490: Took "487.836881ms" to run "out/minikube-linux-amd64 -p functional-434475 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.198:31042
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.198:31042
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
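The NodePort endpoint reported above is reachable directly from the host. As a sketch (the 31042 port is specific to this run and is not reused elsewhere), a plain curl exercises the hello-node service:

curl -s http://192.168.39.198:31042/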

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdspecific-port1015668780/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (232.34188ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdspecific-port1015668780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh "sudo umount -f /mount-9p": exit status 1 (190.276132ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-434475 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdspecific-port1015668780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup815852343/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup815852343/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup815852343/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T" /mount1: exit status 1 (203.516041ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-434475 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-434475 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup815852343/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup815852343/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-434475 /tmp/TestFunctionalparallelMountCmdVerifyCleanup815852343/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-434475
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-434475
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-434475
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (277s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076508 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0803 23:08:27.616446   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:08:55.301652   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:10:58.007760   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:58.013126   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:58.023415   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:58.043707   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:58.083987   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:58.164310   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:58.324818   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:58.645636   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:10:59.286538   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:11:00.566672   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:11:03.127687   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:11:08.248214   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:11:18.489062   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-076508 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m36.352796555s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (277.00s)
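
For reference, the HA start exercised above can be reproduced outside the test harness with the same flags; the profile name below is illustrative, not taken from the log:

  minikube start -p ha-demo --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
  # confirm all control-plane and worker nodes are reported healthy
  minikube -p ha-demo status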

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-076508 -- rollout status deployment/busybox: (3.962075189s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-9mswn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-nfwfw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-wlr2g -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-9mswn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-nfwfw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-wlr2g -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-9mswn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-nfwfw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-wlr2g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-9mswn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-9mswn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-nfwfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-nfwfw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-wlr2g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076508 -- exec busybox-fc5497c4f-wlr2g -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
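
The two-step check above (resolve host.minikube.internal from inside a pod, then ping the resolved address) can be combined into a small sketch; the pod name is a placeholder and assumes a busybox pod with nslookup and ping, as in the test's deployment:

  POD=busybox-example
  HOST_IP=$(kubectl exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  # ping the host-side address that was resolved from inside the pod
  kubectl exec "$POD" -- sh -c "ping -c 1 $HOST_IP"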

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-076508 -v=7 --alsologtostderr
E0803 23:11:38.969504   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:12:19.930392   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-076508 -v=7 --alsologtostderr: (57.660880426s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-076508 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp testdata/cp-test.txt ha-076508:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508:/home/docker/cp-test.txt ha-076508-m02:/home/docker/cp-test_ha-076508_ha-076508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test_ha-076508_ha-076508-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508:/home/docker/cp-test.txt ha-076508-m03:/home/docker/cp-test_ha-076508_ha-076508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test_ha-076508_ha-076508-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508:/home/docker/cp-test.txt ha-076508-m04:/home/docker/cp-test_ha-076508_ha-076508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test_ha-076508_ha-076508-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp testdata/cp-test.txt ha-076508-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m02:/home/docker/cp-test.txt ha-076508:/home/docker/cp-test_ha-076508-m02_ha-076508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test_ha-076508-m02_ha-076508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m02:/home/docker/cp-test.txt ha-076508-m03:/home/docker/cp-test_ha-076508-m02_ha-076508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test_ha-076508-m02_ha-076508-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m02:/home/docker/cp-test.txt ha-076508-m04:/home/docker/cp-test_ha-076508-m02_ha-076508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test_ha-076508-m02_ha-076508-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp testdata/cp-test.txt ha-076508-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt ha-076508:/home/docker/cp-test_ha-076508-m03_ha-076508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test_ha-076508-m03_ha-076508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt ha-076508-m02:/home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test_ha-076508-m03_ha-076508-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m03:/home/docker/cp-test.txt ha-076508-m04:/home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test_ha-076508-m03_ha-076508-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp testdata/cp-test.txt ha-076508-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile214764297/001/cp-test_ha-076508-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt ha-076508:/home/docker/cp-test_ha-076508-m04_ha-076508.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508 "sudo cat /home/docker/cp-test_ha-076508-m04_ha-076508.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt ha-076508-m02:/home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m02 "sudo cat /home/docker/cp-test_ha-076508-m04_ha-076508-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 cp ha-076508-m04:/home/docker/cp-test.txt ha-076508-m03:/home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 ssh -n ha-076508-m03 "sudo cat /home/docker/cp-test_ha-076508-m04_ha-076508-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.75s)
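
The copy matrix exercised above reduces to three `minikube cp` forms plus an ssh check; profile and node names here are placeholders:

  minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt                 # local file into a node
  minikube -p <profile> cp <node>:/home/docker/cp-test.txt ./cp-test-copy.txt                   # node file back to the local machine
  minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/cp-test.txt  # node to node
  minikube -p <profile> ssh -n <node-b> "sudo cat /home/docker/cp-test.txt"                     # verify the contents landed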

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.471620438s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-076508 node delete m03 -v=7 --alsologtostderr: (16.447403473s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (345.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076508 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0803 23:25:58.006952   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:27:21.054358   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0803 23:28:27.618673   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-076508 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m45.185962185s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (345.94s)
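
The readiness check the test runs after the restart prints one Ready-condition status per node; a standalone version with the quoting simplified for an interactive shell looks like:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  # every printed line should read "True" once the restarted cluster has settled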

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (81.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-076508 --control-plane -v=7 --alsologtostderr
E0803 23:30:58.007796   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-076508 --control-plane -v=7 --alsologtostderr: (1m21.11033919s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-076508 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.95s)
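
Taken together with AddWorkerNode earlier, the two node-add variants differ only in the --control-plane flag; the profile name below is a placeholder:

  minikube node add -p <profile>                    # joins an additional worker node
  minikube node add -p <profile> --control-plane    # joins an additional control-plane node to the HA cluster
  minikube -p <profile> status                      # confirm the new node shows up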

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
x
+
TestJSONOutput/start/Command (60.8s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-116976 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-116976 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.802056697s)
--- PASS: TestJSONOutput/start/Command (60.80s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-116976 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-116976 --output=json --user=testUser
E0803 23:33:27.616305   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-116976 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-116976 --output=json --user=testUser: (7.336519004s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-900711 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-900711 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.444479ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"086dadef-b7e5-42b2-8cc3-efd7ccbda06d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-900711] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b87420ff-53c3-4c4d-9fe3-3355bb98d21d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"090d546b-a74a-4d7a-bdd8-3aa424b32cf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb542054-210e-4dca-b9b1-bd1d4af8be91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig"}}
	{"specversion":"1.0","id":"bc13e80b-8afb-4f0f-b210-47604e7b3e65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube"}}
	{"specversion":"1.0","id":"75ffe5de-3061-42ff-a8ac-d5fe8d20f2e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dccf3c7f-1451-4e34-8f58-c234f618c69b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"313425c9-13df-4552-83cd-c18cd1f3070d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-900711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-900711
--- PASS: TestErrorJSONOutput (0.19s)
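
Each line of --output=json is a CloudEvents-style JSON object, as the captured stdout above shows. One way to consume it, reproducing the unsupported-driver case; the jq filtering is illustrative and not part of the test:

  minikube start -p json-demo --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message)"'
  # for the run captured above this prints roughly:
  # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64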

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (88.74s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-719278 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-719278 --driver=kvm2  --container-runtime=crio: (43.652585432s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-721533 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-721533 --driver=kvm2  --container-runtime=crio: (42.277932317s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-719278
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-721533
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-721533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-721533
helpers_test.go:175: Cleaning up "first-719278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-719278
--- PASS: TestMinikubeProfile (88.74s)
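
The profile switching above is the basic multi-profile workflow; the profile names are placeholders:

  minikube start -p first --driver=kvm2 --container-runtime=crio
  minikube start -p second --driver=kvm2 --container-runtime=crio
  minikube profile first                 # make "first" the active profile
  minikube profile list --output json    # machine-readable view of all profiles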

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.11s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-578203 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-578203 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.113210879s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-578203 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-578203 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
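
The same verification applies to any profile started with --mount: list the mount point and confirm a 9p filesystem is present; the profile name is a placeholder:

  minikube -p <profile> ssh -- ls /minikube-host
  minikube -p <profile> ssh -- "mount | grep 9p"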

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-593973 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0803 23:35:58.007655   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-593973 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.492053855s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.49s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593973 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593973 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-578203 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593973 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593973 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-593973
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-593973: (1.289762015s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.82s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-593973
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-593973: (21.818546728s)
--- PASS: TestMountStart/serial/RestartStopped (22.82s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593973 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-593973 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (131.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626202 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0803 23:36:30.663246   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:38:27.616588   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626202 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m11.035045858s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.44s)
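
The two-node bring-up above maps to a single --nodes flag; the profile name below is a placeholder:

  minikube start -p <profile> --nodes=2 --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
  minikube -p <profile> status --alsologtostderr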

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-626202 -- rollout status deployment/busybox: (3.880065733s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-hpnvb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-lj84f -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-hpnvb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-lj84f -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-hpnvb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-lj84f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.32s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-hpnvb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-hpnvb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-lj84f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-626202 -- exec busybox-fc5497c4f-lj84f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (51.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-626202 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-626202 -v 3 --alsologtostderr: (51.052513308s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.63s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-626202 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp testdata/cp-test.txt multinode-626202:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile807028884/001/cp-test_multinode-626202.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202:/home/docker/cp-test.txt multinode-626202-m02:/home/docker/cp-test_multinode-626202_multinode-626202-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m02 "sudo cat /home/docker/cp-test_multinode-626202_multinode-626202-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202:/home/docker/cp-test.txt multinode-626202-m03:/home/docker/cp-test_multinode-626202_multinode-626202-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m03 "sudo cat /home/docker/cp-test_multinode-626202_multinode-626202-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp testdata/cp-test.txt multinode-626202-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile807028884/001/cp-test_multinode-626202-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt multinode-626202:/home/docker/cp-test_multinode-626202-m02_multinode-626202.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202 "sudo cat /home/docker/cp-test_multinode-626202-m02_multinode-626202.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202-m02:/home/docker/cp-test.txt multinode-626202-m03:/home/docker/cp-test_multinode-626202-m02_multinode-626202-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m03 "sudo cat /home/docker/cp-test_multinode-626202-m02_multinode-626202-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp testdata/cp-test.txt multinode-626202-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile807028884/001/cp-test_multinode-626202-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt multinode-626202:/home/docker/cp-test_multinode-626202-m03_multinode-626202.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202 "sudo cat /home/docker/cp-test_multinode-626202-m03_multinode-626202.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 cp multinode-626202-m03:/home/docker/cp-test.txt multinode-626202-m02:/home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 ssh -n multinode-626202-m02 "sudo cat /home/docker/cp-test_multinode-626202-m03_multinode-626202-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-626202 node stop m03: (1.440025315s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626202 status: exit status 7 (417.136404ms)

                                                
                                                
-- stdout --
	multinode-626202
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-626202-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-626202-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-626202 status --alsologtostderr: exit status 7 (423.067682ms)

                                                
                                                
-- stdout --
	multinode-626202
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-626202-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-626202-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:39:46.954551   46210 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:39:46.954700   46210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:39:46.954711   46210 out.go:304] Setting ErrFile to fd 2...
	I0803 23:39:46.954717   46210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:39:46.954938   46210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0803 23:39:46.955130   46210 out.go:298] Setting JSON to false
	I0803 23:39:46.955161   46210 mustload.go:65] Loading cluster: multinode-626202
	I0803 23:39:46.955282   46210 notify.go:220] Checking for updates...
	I0803 23:39:46.955568   46210 config.go:182] Loaded profile config "multinode-626202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:39:46.955592   46210 status.go:255] checking status of multinode-626202 ...
	I0803 23:39:46.956008   46210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:39:46.956064   46210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:39:46.976254   46210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0803 23:39:46.976665   46210 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:39:46.977263   46210 main.go:141] libmachine: Using API Version  1
	I0803 23:39:46.977283   46210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:39:46.977618   46210 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:39:46.977801   46210 main.go:141] libmachine: (multinode-626202) Calling .GetState
	I0803 23:39:46.979229   46210 status.go:330] multinode-626202 host status = "Running" (err=<nil>)
	I0803 23:39:46.979248   46210 host.go:66] Checking if "multinode-626202" exists ...
	I0803 23:39:46.979522   46210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:39:46.979563   46210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:39:46.995195   46210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I0803 23:39:46.995567   46210 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:39:46.996041   46210 main.go:141] libmachine: Using API Version  1
	I0803 23:39:46.996060   46210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:39:46.996354   46210 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:39:46.996548   46210 main.go:141] libmachine: (multinode-626202) Calling .GetIP
	I0803 23:39:46.999072   46210 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:39:46.999440   46210 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:39:46.999462   46210 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:39:46.999663   46210 host.go:66] Checking if "multinode-626202" exists ...
	I0803 23:39:47.000127   46210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:39:47.000173   46210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:39:47.015240   46210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0803 23:39:47.015668   46210 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:39:47.016127   46210 main.go:141] libmachine: Using API Version  1
	I0803 23:39:47.016157   46210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:39:47.016395   46210 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:39:47.016584   46210 main.go:141] libmachine: (multinode-626202) Calling .DriverName
	I0803 23:39:47.016743   46210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:39:47.016775   46210 main.go:141] libmachine: (multinode-626202) Calling .GetSSHHostname
	I0803 23:39:47.019394   46210 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:39:47.019957   46210 main.go:141] libmachine: (multinode-626202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:5e:6f", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:36:43 +0000 UTC Type:0 Mac:52:54:00:1f:5e:6f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-626202 Clientid:01:52:54:00:1f:5e:6f}
	I0803 23:39:47.019993   46210 main.go:141] libmachine: (multinode-626202) DBG | domain multinode-626202 has defined IP address 192.168.39.176 and MAC address 52:54:00:1f:5e:6f in network mk-multinode-626202
	I0803 23:39:47.020045   46210 main.go:141] libmachine: (multinode-626202) Calling .GetSSHPort
	I0803 23:39:47.020218   46210 main.go:141] libmachine: (multinode-626202) Calling .GetSSHKeyPath
	I0803 23:39:47.020397   46210 main.go:141] libmachine: (multinode-626202) Calling .GetSSHUsername
	I0803 23:39:47.020489   46210 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202/id_rsa Username:docker}
	I0803 23:39:47.104893   46210 ssh_runner.go:195] Run: systemctl --version
	I0803 23:39:47.111240   46210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:39:47.127058   46210 kubeconfig.go:125] found "multinode-626202" server: "https://192.168.39.176:8443"
	I0803 23:39:47.127089   46210 api_server.go:166] Checking apiserver status ...
	I0803 23:39:47.127132   46210 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:39:47.141429   46210 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup
	W0803 23:39:47.151834   46210 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:39:47.151892   46210 ssh_runner.go:195] Run: ls
	I0803 23:39:47.156533   46210 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0803 23:39:47.160672   46210 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I0803 23:39:47.160701   46210 status.go:422] multinode-626202 apiserver status = Running (err=<nil>)
	I0803 23:39:47.160715   46210 status.go:257] multinode-626202 status: &{Name:multinode-626202 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:39:47.160733   46210 status.go:255] checking status of multinode-626202-m02 ...
	I0803 23:39:47.161056   46210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:39:47.161091   46210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:39:47.176459   46210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42253
	I0803 23:39:47.176929   46210 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:39:47.177429   46210 main.go:141] libmachine: Using API Version  1
	I0803 23:39:47.177448   46210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:39:47.177750   46210 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:39:47.177962   46210 main.go:141] libmachine: (multinode-626202-m02) Calling .GetState
	I0803 23:39:47.179832   46210 status.go:330] multinode-626202-m02 host status = "Running" (err=<nil>)
	I0803 23:39:47.179847   46210 host.go:66] Checking if "multinode-626202-m02" exists ...
	I0803 23:39:47.180189   46210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:39:47.180244   46210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:39:47.195238   46210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0803 23:39:47.195613   46210 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:39:47.196048   46210 main.go:141] libmachine: Using API Version  1
	I0803 23:39:47.196069   46210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:39:47.196348   46210 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:39:47.196518   46210 main.go:141] libmachine: (multinode-626202-m02) Calling .GetIP
	I0803 23:39:47.199093   46210 main.go:141] libmachine: (multinode-626202-m02) DBG | domain multinode-626202-m02 has defined MAC address 52:54:00:d6:fd:83 in network mk-multinode-626202
	I0803 23:39:47.199497   46210 main.go:141] libmachine: (multinode-626202-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:fd:83", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:38:01 +0000 UTC Type:0 Mac:52:54:00:d6:fd:83 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-626202-m02 Clientid:01:52:54:00:d6:fd:83}
	I0803 23:39:47.199521   46210 main.go:141] libmachine: (multinode-626202-m02) DBG | domain multinode-626202-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:d6:fd:83 in network mk-multinode-626202
	I0803 23:39:47.199602   46210 host.go:66] Checking if "multinode-626202-m02" exists ...
	I0803 23:39:47.199908   46210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:39:47.199949   46210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:39:47.214638   46210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0803 23:39:47.215008   46210 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:39:47.215414   46210 main.go:141] libmachine: Using API Version  1
	I0803 23:39:47.215433   46210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:39:47.215697   46210 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:39:47.215874   46210 main.go:141] libmachine: (multinode-626202-m02) Calling .DriverName
	I0803 23:39:47.216053   46210 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:39:47.216075   46210 main.go:141] libmachine: (multinode-626202-m02) Calling .GetSSHHostname
	I0803 23:39:47.218597   46210 main.go:141] libmachine: (multinode-626202-m02) DBG | domain multinode-626202-m02 has defined MAC address 52:54:00:d6:fd:83 in network mk-multinode-626202
	I0803 23:39:47.218999   46210 main.go:141] libmachine: (multinode-626202-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:fd:83", ip: ""} in network mk-multinode-626202: {Iface:virbr1 ExpiryTime:2024-08-04 00:38:01 +0000 UTC Type:0 Mac:52:54:00:d6:fd:83 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-626202-m02 Clientid:01:52:54:00:d6:fd:83}
	I0803 23:39:47.219028   46210 main.go:141] libmachine: (multinode-626202-m02) DBG | domain multinode-626202-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:d6:fd:83 in network mk-multinode-626202
	I0803 23:39:47.219174   46210 main.go:141] libmachine: (multinode-626202-m02) Calling .GetSSHPort
	I0803 23:39:47.219312   46210 main.go:141] libmachine: (multinode-626202-m02) Calling .GetSSHKeyPath
	I0803 23:39:47.219435   46210 main.go:141] libmachine: (multinode-626202-m02) Calling .GetSSHUsername
	I0803 23:39:47.219528   46210 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-9607/.minikube/machines/multinode-626202-m02/id_rsa Username:docker}
	I0803 23:39:47.300975   46210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:39:47.316429   46210 status.go:257] multinode-626202-m02 status: &{Name:multinode-626202-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:39:47.316462   46210 status.go:255] checking status of multinode-626202-m03 ...
	I0803 23:39:47.316776   46210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:39:47.316822   46210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:39:47.332084   46210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I0803 23:39:47.332433   46210 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:39:47.332884   46210 main.go:141] libmachine: Using API Version  1
	I0803 23:39:47.332908   46210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:39:47.333257   46210 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:39:47.333481   46210 main.go:141] libmachine: (multinode-626202-m03) Calling .GetState
	I0803 23:39:47.335042   46210 status.go:330] multinode-626202-m03 host status = "Stopped" (err=<nil>)
	I0803 23:39:47.335059   46210 status.go:343] host is not running, skipping remaining checks
	I0803 23:39:47.335066   46210 status.go:257] multinode-626202-m03 status: &{Name:multinode-626202-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-626202 node start m03 -v=7 --alsologtostderr: (39.712221727s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.34s)
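The sequence above simply restarts the previously stopped m03 worker and re-checks cluster status. A minimal sketch for reproducing it by hand, assuming `minikube` on PATH stands in for the `out/minikube-linux-amd64` binary used by the test:

    $ minikube -p multinode-626202 node start m03    # bring the stopped worker back up
    $ minikube -p multinode-626202 status            # all three nodes should report Running
    $ kubectl get nodes                              # the restarted node should return to Ready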

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-626202 node delete m03: (1.640740847s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.17s)
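Note that the final readiness assertion parses node conditions with a kubectl go-template rather than human-readable output. A hedged sketch of the same check after removing a node (template copied from the log, shell quoting simplified):

    $ minikube -p multinode-626202 node delete m03
    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expect one "True" line per remaining node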

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (191.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626202 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0803 23:48:27.619004   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0803 23:50:58.007324   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626202 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m10.850466817s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-626202 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (191.36s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-626202
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626202-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-626202-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.923239ms)

                                                
                                                
-- stdout --
	* [multinode-626202-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-626202-m02' is duplicated with machine name 'multinode-626202-m02' in profile 'multinode-626202'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-626202-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-626202-m03 --driver=kvm2  --container-runtime=crio: (43.598842494s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-626202
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-626202: exit status 80 (209.970312ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-626202 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-626202-m03 already exists in multinode-626202-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-626202-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.68s)
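The two non-zero exits above are argument validation: a new profile may not reuse a machine name owned by an existing profile, and `node add` refuses to create a node name that already exists. In outline (exit codes taken from the log, `minikube` standing in for the test binary):

    $ minikube start -p multinode-626202-m02 --driver=kvm2 --container-runtime=crio   # exit 14: name collides with the m02 machine of profile multinode-626202
    $ minikube node add -p multinode-626202                                           # exit 80: the next node name (m03) already exists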

                                                
                                    
x
+
TestScheduledStopUnix (115.13s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-144411 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-144411 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.58163733s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144411 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-144411 -n scheduled-stop-144411
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144411 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144411 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144411 -n scheduled-stop-144411
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-144411
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144411 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-144411
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-144411: exit status 7 (65.035409ms)

                                                
                                                
-- stdout --
	scheduled-stop-144411
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144411 -n scheduled-stop-144411
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144411 -n scheduled-stop-144411: exit status 7 (63.689871ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-144411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-144411
--- PASS: TestScheduledStopUnix (115.13s)
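The scheduled-stop flow arms a delayed stop, cancels it, then arms a short one and waits for the host to go down. A compact sketch of the same commands (profile name reused from the log; `minikube` stands in for the test binary):

    $ minikube stop -p scheduled-stop-144411 --schedule 5m          # arm a delayed stop
    $ minikube stop -p scheduled-stop-144411 --cancel-scheduled     # cancel it; the host stays Running
    $ minikube stop -p scheduled-stop-144411 --schedule 15s         # arm a short stop and let it fire
    $ minikube status --format={{.Host}} -p scheduled-stop-144411   # eventually prints Stopped and exits 7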

                                                
                                    
x
+
TestRunningBinaryUpgrade (225.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1419862715 start -p running-upgrade-860380 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1419862715 start -p running-upgrade-860380 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m24.496666545s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-860380 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0804 00:00:58.007552   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-860380 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m16.7607207s)
helpers_test.go:175: Cleaning up "running-upgrade-860380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-860380
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-860380: (1.212590872s)
--- PASS: TestRunningBinaryUpgrade (225.12s)
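The upgrade path exercised here: bring a cluster up with an old release binary, then restart the same profile in place with the binary under test. Outline, with the versioned temp binary path taken from the log:

    $ /tmp/minikube-v1.26.0.1419862715 start -p running-upgrade-860380 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 start -p running-upgrade-860380 --memory=2200 --driver=kvm2 --container-runtime=crio   # same profile, new binary
    $ out/minikube-linux-amd64 delete -p running-upgrade-860380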

                                                
                                    
x
+
TestPause/serial/Start (128.96s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-908631 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-908631 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m8.957270848s)
--- PASS: TestPause/serial/Start (128.96s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (117.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.862087499 start -p stopped-upgrade-082329 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.862087499 start -p stopped-upgrade-082329 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m10.690633574s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.862087499 -p stopped-upgrade-082329 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.862087499 -p stopped-upgrade-082329 stop: (2.132789106s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-082329 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-082329 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.740616257s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.56s)
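This variant differs from the running-binary upgrade only in that the old binary stops the cluster before the new binary starts it again. Outline from the logged commands:

    $ /tmp/minikube-v1.26.0.862087499 start -p stopped-upgrade-082329 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    $ /tmp/minikube-v1.26.0.862087499 -p stopped-upgrade-082329 stop
    $ out/minikube-linux-amd64 start -p stopped-upgrade-082329 --memory=2200 --driver=kvm2 --container-runtime=crio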

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-082329
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-082329: (1.037990353s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551054 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-551054 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (61.426827ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-551054] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
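The rejection is expected: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and the same message fires when a kubernetes-version is set in the global config. Sketch of the failing call and the unset step the error suggests (`minikube` stands in for the test binary):

    $ minikube start -p NoKubernetes-551054 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # exit 14 (MK_USAGE)
    $ minikube config unset kubernetes-version   # clear a globally configured version before retrying with --no-kubernetes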

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (48.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551054 --driver=kvm2  --container-runtime=crio
E0804 00:03:27.616650   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551054 --driver=kvm2  --container-runtime=crio: (48.139040868s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-551054 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-159277 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-159277 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (103.543238ms)

                                                
                                                
-- stdout --
	* [false-159277] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:03:37.598758   58582 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:03:37.598869   58582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:03:37.598878   58582 out.go:304] Setting ErrFile to fd 2...
	I0804 00:03:37.598882   58582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:03:37.599064   58582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-9607/.minikube/bin
	I0804 00:03:37.599571   58582 out.go:298] Setting JSON to false
	I0804 00:03:37.600516   58582 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6362,"bootTime":1722723456,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:03:37.600572   58582 start.go:139] virtualization: kvm guest
	I0804 00:03:37.602673   58582 out.go:177] * [false-159277] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:03:37.603932   58582 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:03:37.603936   58582 notify.go:220] Checking for updates...
	I0804 00:03:37.606245   58582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:03:37.607443   58582 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-9607/kubeconfig
	I0804 00:03:37.608623   58582 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-9607/.minikube
	I0804 00:03:37.610054   58582 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:03:37.611458   58582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:03:37.613274   58582 config.go:182] Loaded profile config "NoKubernetes-551054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:03:37.613416   58582 config.go:182] Loaded profile config "cert-expiration-705918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:03:37.613545   58582 config.go:182] Loaded profile config "kubernetes-upgrade-302198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:03:37.613667   58582 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:03:37.651743   58582 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:03:37.652888   58582 start.go:297] selected driver: kvm2
	I0804 00:03:37.652906   58582 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:03:37.652921   58582 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:03:37.654837   58582 out.go:177] 
	W0804 00:03:37.656306   58582 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0804 00:03:37.657604   58582 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-159277 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-159277" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 04 Aug 2024 00:02:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.231:8443
  name: cert-expiration-705918
contexts:
- context:
    cluster: cert-expiration-705918
    extensions:
    - extension:
        last-update: Sun, 04 Aug 2024 00:02:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-705918
  name: cert-expiration-705918
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-705918
  user:
    client-certificate: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/cert-expiration-705918/client.crt
    client-key: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/cert-expiration-705918/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-159277

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-159277"

                                                
                                                
----------------------- debugLogs end: false-159277 [took: 2.760111419s] --------------------------------
helpers_test.go:175: Cleaning up "false-159277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-159277
--- PASS: TestNetworkPlugins/group/false (3.04s)
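This test only checks argument validation: `--cni=false` is rejected up front because the crio runtime requires a CNI plugin, so no cluster is created and the debugLogs dump above consists of expected "profile not found" / "context not found" noise. Sketch of the rejected invocation:

    $ minikube start -p false-159277 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # X Exiting due to MK_USAGE: The "crio" container runtime requires CNI   (exit 14)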

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (42.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551054 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551054 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.819770904s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-551054 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-551054 status -o json: exit status 2 (224.524172ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-551054","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-551054
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.02s)
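Restarting an existing profile with `--no-kubernetes` keeps the VM but leaves the control plane down, which is what the JSON status above reflects (Host Running, Kubelet and APIServer Stopped, exit status 2). Sketch:

    $ minikube start -p NoKubernetes-551054 --no-kubernetes --driver=kvm2 --container-runtime=crio
    $ minikube -p NoKubernetes-551054 status -o json   # exit 2; Host Running, Kubelet/APIServer Stopped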

                                                
                                    
x
+
TestNoKubernetes/serial/Start (26.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551054 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551054 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.520700686s)
--- PASS: TestNoKubernetes/serial/Start (26.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-551054 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-551054 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.467464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
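The verification is just an SSH probe of the kubelet unit; a non-zero exit (unit inactive) is the passing outcome for a no-Kubernetes profile. Sketch:

    $ minikube ssh -p NoKubernetes-551054 "sudo systemctl is-active --quiet service kubelet"
    $ echo $?   # non-zero here means kubelet is not running, which is what the test expects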

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (15.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.659735174s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-551054
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-551054: (1.276714929s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (25.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-551054 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-551054 --driver=kvm2  --container-runtime=crio: (25.119814836s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (150.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-118016 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-118016 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (2m30.666301955s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (150.67s)
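
With --preload=false the start skips the preloaded image tarball and pulls the Kubernetes images individually, which accounts for much of the 2m30s here. The pulled set can be inspected directly on the node; a sketch, assuming crictl is present on the CRI-O node:

    # List the images the container runtime actually holds after a no-preload start.
    out/minikube-linux-amd64 -p no-preload-118016 ssh "sudo crictl images"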

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-551054 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-551054 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.558509ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (77.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-877598 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0804 00:05:58.007589   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-877598 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m17.519519132s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.52s)
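
The --embed-certs flag asks minikube to inline the client certificate and key into the kubeconfig entry instead of referencing files on disk. A sketch of spot-checking that; it assumes the kubeconfig user entry is named after the profile, which is minikube's usual convention:

    # Non-empty output indicates the cert data is embedded rather than stored as a file path.
    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-877598")].user.client-certificate-data}' | head -c 40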

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-877598 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b6695481-0ca0-446c-b491-4547368cc051] Pending
helpers_test.go:344: "busybox" [b6695481-0ca0-446c-b491-4547368cc051] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b6695481-0ca0-446c-b491-4547368cc051] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004884478s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-877598 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
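
The deploy step creates the busybox pod from testdata/busybox.yaml, waits for it to reach Running, then runs the ulimit probe inside it. Roughly the same flow can be expressed with kubectl alone; a sketch using the label the test polls on:

    kubectl --context embed-certs-877598 create -f testdata/busybox.yaml
    kubectl --context embed-certs-877598 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-877598 exec busybox -- /bin/sh -c "ulimit -n"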

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-877598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-877598 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)
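
The addon is enabled with image and registry overrides that swap the metrics-server image for echoserver:1.4 served from fake.domain, and the describe call only confirms the deployment exists. A sketch of checking that the override actually landed in the pod template; it assumes metrics-server is the first container in the deployment:

    kubectl --context embed-certs-877598 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'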

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-969068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-969068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m39.494963283s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.50s)
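
This profile starts with --apiserver-port=8444 instead of the default 8443. A sketch of confirming the port made it into the kubeconfig; it assumes minikube's usual convention of naming the cluster entry after the profile:

    # Expect a server URL ending in :8444.
    kubectl config view \
      -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-969068")].cluster.server}'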

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-118016 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e584a9fc-d5c5-44d5-b976-d8354ebe39b1] Pending
helpers_test.go:344: "busybox" [e584a9fc-d5c5-44d5-b976-d8354ebe39b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e584a9fc-d5c5-44d5-b976-d8354ebe39b1] Running
E0804 00:08:27.616121   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004337861s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-118016 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-118016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-118016 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9] Pending
helpers_test.go:344: "busybox" [0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e630ac1-64f1-49f7-ac4a-71bd1c47fdc9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003818669s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-969068 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-969068 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (636.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-877598 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-877598 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m36.206264695s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-877598 -n embed-certs-877598
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (636.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-576210 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-576210 --alsologtostderr -v=3: (2.591437676s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210: exit status 7 (64.344627ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-576210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
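
As the log notes, exit status 7 from "minikube status" corresponds to the Stopped host state and is acceptable here; only other non-zero codes would indicate a real failure. A sketch of handling that distinction explicitly; the shell wrapper itself is illustrative, not part of the test:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-576210 -n old-k8s-version-576210
    rc=$?
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
      echo "unexpected status exit code: $rc" >&2
      exit 1
    fi
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-576210 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4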

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (572.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-118016 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-118016 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (9m32.183310671s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118016 -n no-preload-118016
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (572.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (494.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-969068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0804 00:13:27.616267   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
E0804 00:15:58.007146   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0804 00:17:21.059049   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
E0804 00:18:27.616056   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/addons-110246/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-969068 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m14.457713888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-969068 -n default-k8s-diff-port-969068
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (494.71s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (53.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-836281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0804 00:34:01.059825   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-836281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (53.851257164s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.85s)
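
The newest-cni profile passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 and waits only for the apiserver, system pods and default service account (the warnings later in this group note that cni mode needs extra setup before ordinary pods can schedule). A sketch of confirming the extra-config took effect; the node's allocated podCIDR should be a subnet of 10.42.0.0/16:

    kubectl --context newest-cni-836281 get nodes \
      -o jsonpath='{.items[0].spec.podCIDR}'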

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-836281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-836281 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084560005s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-836281 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-836281 --alsologtostderr -v=3: (10.471646526s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836281 -n newest-cni-836281
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836281 -n newest-cni-836281: exit status 7 (64.196942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-836281 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (42.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-836281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-836281 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (41.621600995s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836281 -n newest-cni-836281
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-836281 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-836281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836281 -n newest-cni-836281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836281 -n newest-cni-836281: exit status 2 (250.95539ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836281 -n newest-cni-836281
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836281 -n newest-cni-836281: exit status 2 (240.437334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-836281 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836281 -n newest-cni-836281
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836281 -n newest-cni-836281
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)
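
The pause sequence is: pause the profile, confirm that "minikube status" now exits 2 with the apiserver reported as Paused and the kubelet as Stopped, then unpause and confirm status succeeds again. A sketch of the same round trip; the "|| true" only keeps the expected exit-2 probes from aborting a script:

    out/minikube-linux-amd64 pause -p newest-cni-836281 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836281 || true   # "Paused", exit status 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836281 || true     # "Stopped", exit status 2
    out/minikube-linux-amd64 unpause -p newest-cni-836281 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836281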

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (102.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.055413056s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (94.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m34.535035422s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (130.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0804 00:35:58.007904   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/functional-434475/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m10.636192067s)
--- PASS: TestNetworkPlugins/group/calico/Start (130.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-159277 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-159277 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-brxdq" [5f59542b-5481-40c5-8d9f-d7e0a49269d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-brxdq" [5f59542b-5481-40c5-8d9f-d7e0a49269d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005862841s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hsnjd" [f9148e5f-c4e1-4471-91a9-66761beb1ffa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005216598s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
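
ControllerPod gates the rest of the kindnet checks on the CNI daemonset pod being up. The same readiness gate can be expressed directly with kubectl wait on the label the test polls (app=kindnet in kube-system); a sketch:

    kubectl --context kindnet-159277 -n kube-system wait pod -l app=kindnet \
      --for=condition=Ready --timeout=10m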

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-159277 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-159277 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2jkn2" [4ed4efba-54f6-4fc1-8a87-e7a1c06ce1dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2jkn2" [4ed4efba-54f6-4fc1-8a87-e7a1c06ce1dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004636271s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (83.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m23.371996027s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-159277 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
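
The DNS, Localhost and HairPin checks for each plugin all run inside the netcat deployment created from testdata/netcat-deployment.yaml: a cluster DNS lookup, a loopback connection, and a connection back to the pod through its own service name (the hairpin case). The three probes, as run for the auto profile:

    kubectl --context auto-159277 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"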

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-159277 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (115.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m55.188607548s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (115.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (113.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0804 00:38:02.042592   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:02.047895   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:02.058200   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:02.078568   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:02.119273   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:02.199660   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:02.360795   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:02.681430   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:03.321618   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m53.764622202s)
--- PASS: TestNetworkPlugins/group/flannel/Start (113.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vj4qf" [3df54b7b-367c-4d66-baca-020041cb1714] Running
E0804 00:38:04.602346   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:07.162649   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.075653821s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-159277 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-159277 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pvmhc" [2e25b30f-f11e-4440-92b1-a855ea3dcb35] Pending
E0804 00:38:12.283169   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-pvmhc" [2e25b30f-f11e-4440-92b1-a855ea3dcb35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pvmhc" [2e25b30f-f11e-4440-92b1-a855ea3dcb35] Running
E0804 00:38:18.287822   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:18.293129   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:18.303453   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:18.323752   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:18.364107   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:18.444459   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:18.604653   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:18.925209   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:19.565919   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:20.846901   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
E0804 00:38:22.524387   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
E0804 00:38:23.407515   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003415928s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-159277 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (76.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0804 00:38:43.005302   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/old-k8s-version-576210/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-159277 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m16.282886324s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-159277 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-159277 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xz9bk" [647dc2ba-e54c-466d-b82b-2d3fec234fcf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 00:38:59.249492   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/no-preload-118016/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-xz9bk" [647dc2ba-e54c-466d-b82b-2d3fec234fcf] Running
E0804 00:39:10.668093   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:10.673440   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:10.683755   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:10.704102   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:10.744436   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:10.824796   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:10.985887   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:11.306535   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
E0804 00:39:11.947035   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004454261s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-159277 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0804 00:39:13.227857   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-159277 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-159277 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-d9g9r" [afd9de56-b169-4fcd-9772-64fde31a6afb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0804 00:39:51.630765   16795 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/default-k8s-diff-port-969068/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-d9g9r" [afd9de56-b169-4fcd-9772-64fde31a6afb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004072142s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qx8sh" [6a8a6db0-8a59-4317-8958-7226b9961823] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004120645s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-159277 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-159277 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-159277 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5mm99" [ff3df0b6-7b0b-4550-811b-0dc4f004ba22] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5mm99" [ff3df0b6-7b0b-4550-811b-0dc4f004ba22] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.0040467s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-159277 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-db2fw" [18777a4c-b9da-402d-83b3-6e27d3d5a125] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-db2fw" [18777a4c-b9da-402d-83b3-6e27d3d5a125] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.003662443s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-159277 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-159277 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-159277 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-159277 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
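Note: the DNS, Localhost and HairPin steps above are the same probe executed from inside the netcat pod for each CNI group: an in-cluster nslookup of kubernetes.default, an nc against localhost:8080, and an nc against the pod's own Service name to confirm hairpin NAT. A condensed Go sketch that replays the three probes for one context; the commands are copied from the log, the wrapper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment via kubectl exec.
func probe(ctx, cmd string) error {
	out, err := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", cmd).CombinedOutput()
	fmt.Printf("%s\n%s\n", cmd, out)
	return err
}

func main() {
	ctx := "bridge-159277"
	checks := []string{
		"nslookup kubernetes.default",    // DNS: cluster DNS resolves the API service
		"nc -w 5 -i 5 -z localhost 8080", // Localhost: the pod can reach its own port
		"nc -w 5 -i 5 -z netcat 8080",    // HairPin: the pod can reach itself via its Service
	}
	for _, c := range checks {
		if err := probe(ctx, c); err != nil {
			fmt.Println("probe failed:", err)
		}
	}
}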

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
269 TestStartStop/group/disable-driver-mounts 0.14
277 TestNetworkPlugins/group/kubenet 2.64
285 TestNetworkPlugins/group/cilium 3.88
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
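Note: most skips in this run follow one shape: a guard at the top of the test inspects the configured container runtime (or the host OS/driver) and calls t.Skip before any cluster work happens. A simplified Go sketch of that pattern; ContainerRuntime() is a hypothetical stand-in for the suite helper that reports the runtime under test, and the real guards live in docker_test.go, scheduled_stop_test.go and friends:

package sketch

import (
	"runtime"
	"testing"
)

// ContainerRuntime stands in for the suite helper reporting which runtime this
// run was started with ("docker", "containerd" or "crio"). Hypothetical.
func ContainerRuntime() string { return "crio" }

func TestDockerFlagsSketch(t *testing.T) {
	if ContainerRuntime() != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", ContainerRuntime())
	}
	// ... actual docker flag assertions would follow here ...
}

// The OS-gated skips (e.g. ScheduledStopWindows) use the same shape with runtime.GOOS.
func TestScheduledStopWindowsSketch(t *testing.T) {
	if runtime.GOOS != "windows" {
		t.Skip("test only runs on windows")
	}
}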

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
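Note: the whole TunnelCmd serial group is skipped with "password required to execute 'route'": the tunnel tests need to edit the host routing table, so they first check whether route can run via passwordless sudo and bail out otherwise. A hedged Go sketch of such a precondition check; the actual helper in functional_test_tunnel_test.go may probe differently:

package sketch

import (
	"os/exec"
	"testing"
)

// requiresPasswordlessRoute skips the test when `sudo -n route` cannot run
// without prompting, mirroring the skip reason seen in this report.
func requiresPasswordlessRoute(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}

func TestTunnelSketch(t *testing.T) {
	requiresPasswordlessRoute(t)
	// ... tunnel setup and DNS/service reachability checks would follow ...
}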

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-423330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-423330
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-159277 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-159277" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 04 Aug 2024 00:02:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.231:8443
  name: cert-expiration-705918
contexts:
- context:
    cluster: cert-expiration-705918
    extensions:
    - extension:
        last-update: Sun, 04 Aug 2024 00:02:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-705918
  name: cert-expiration-705918
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-705918
  user:
    client-certificate: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/cert-expiration-705918/client.crt
    client-key: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/cert-expiration-705918/client.key
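Note: the config dump above explains why every kubenet debug probe fails: the kubeconfig only contains the leftover cert-expiration-705918 entry and current-context is empty, so the context "kubenet-159277" was never created (the profile was skipped before start). A small client-go sketch for checking that a context exists before running probes like these; illustrative only, not part of the suite:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	name := "kubenet-159277"
	if _, ok := cfg.Contexts[name]; !ok {
		// Matches the errors in the debug log: the context simply does not exist.
		fmt.Printf("context %q not found; current-context is %q\n", name, cfg.CurrentContext)
		return
	}
	fmt.Printf("context %q exists, safe to run kubectl --context %s ...\n", name, name)
}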

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-159277

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-159277"

                                                
                                                
----------------------- debugLogs end: kubenet-159277 [took: 2.502346845s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-159277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-159277
--- SKIP: TestNetworkPlugins/group/kubenet (2.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-159277 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-159277" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19364-9607/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 04 Aug 2024 00:02:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.231:8443
  name: cert-expiration-705918
contexts:
- context:
    cluster: cert-expiration-705918
    extensions:
    - extension:
        last-update: Sun, 04 Aug 2024 00:02:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-705918
  name: cert-expiration-705918
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-705918
  user:
    client-certificate: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/cert-expiration-705918/client.crt
    client-key: /home/jenkins/minikube-integration/19364-9607/.minikube/profiles/cert-expiration-705918/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-159277

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-159277" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159277"

                                                
                                                
----------------------- debugLogs end: cilium-159277 [took: 3.690758826s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-159277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-159277
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)

                                                
                                    